Test Report: Hyper-V_Windows 18702

7da1c16e9c0a3f17226e01717faf9df7d280508b:2024-04-21:34140

Test failures (15/197)

TestAddons/parallel/Registry (84.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 19.9956ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bnb57" [55624ae1-7020-4c3a-afc6-ad88a84abcd2] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0187762s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lm7j7" [8dec9623-48be-4801-bd1f-1439be46c0f2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0126759s
addons_test.go:340: (dbg) Run:  kubectl --context addons-519700 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-519700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-519700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (16.5967147s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 ip: (2.5526061s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0421 18:31:03.865996   10596 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-519700 ip"
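Note: the failure above is the empty-stderr assertion at addons_test.go:364 -- the `minikube ip` call itself succeeds, but the Docker CLI warning about the missing "default" context lands on stderr. Below is a minimal standalone sketch of that kind of check; the binary path and profile name are copied from the log, while the program itself is only illustrative and is not the actual minikube test helper.

package main

// Sketch (not the real minikube test code) of an empty-stderr check:
// the command's stdout (the cluster IP) is accepted, but any text on
// stderr -- such as the Docker CLI "context \"default\" not found"
// warning seen in this report -- is treated as a failure.
import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the log above.
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-519700", "ip")

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "command failed: %v\n", err)
		os.Exit(1)
	}

	fmt.Printf("ip: %s", stdout.String())

	// This is the kind of assertion that trips here: stderr must be empty.
	if stderr.Len() != 0 {
		fmt.Fprintf(os.Stderr, "expected stderr to be empty but got: %q\n", stderr.String())
		os.Exit(1)
	}
}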
2024/04/21 18:31:06 [DEBUG] GET http://172.27.202.1:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable registry --alsologtostderr -v=1: (16.2516413s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-519700 -n addons-519700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-519700 -n addons-519700: (12.9170101s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 logs -n 25: (10.668385s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-841000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |                     |
	|         | -p download-only-841000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| delete  | -p download-only-841000                                                                     | download-only-841000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| start   | -o=json --download-only                                                                     | download-only-510000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |                     |
	|         | -p download-only-510000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| delete  | -p download-only-510000                                                                     | download-only-510000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| delete  | -p download-only-841000                                                                     | download-only-841000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| delete  | -p download-only-510000                                                                     | download-only-510000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-311400 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |                     |
	|         | binary-mirror-311400                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:59571                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-311400                                                                     | binary-mirror-311400 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| addons  | enable dashboard -p                                                                         | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |                     |
	|         | addons-519700                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |                     |
	|         | addons-519700                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-519700 --wait=true                                                                | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-519700 addons                                                                        | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:30 UTC | 21 Apr 24 18:30 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-519700 ssh cat                                                                       | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC | 21 Apr 24 18:31 UTC |
	|         | /opt/local-path-provisioner/pvc-4f455934-1b66-474d-b61f-c07d9fbf4635_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-519700 ip                                                                            | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC | 21 Apr 24 18:31 UTC |
	| addons  | addons-519700 addons disable                                                                | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC | 21 Apr 24 18:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-519700 addons disable                                                                | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC | 21 Apr 24 18:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-519700 addons disable                                                                | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC | 21 Apr 24 18:31 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-519700 addons                                                                        | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC |                     |
	|         | addons-519700                                                                               |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-519700        | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:31 UTC |                     |
	|         | addons-519700                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:23:53
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:23:53.777224   13136 out.go:291] Setting OutFile to fd 820 ...
	I0421 18:23:53.778393   13136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:23:53.778393   13136 out.go:304] Setting ErrFile to fd 780...
	I0421 18:23:53.778393   13136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:23:53.803939   13136 out.go:298] Setting JSON to false
	I0421 18:23:53.808828   13136 start.go:129] hostinfo: {"hostname":"minikube6","uptime":9709,"bootTime":1713714124,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 18:23:53.809384   13136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 18:23:53.818980   13136 out.go:177] * [addons-519700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 18:23:53.823427   13136 notify.go:220] Checking for updates...
	I0421 18:23:53.826050   13136 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:23:53.833841   13136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:23:53.836396   13136 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 18:23:53.842025   13136 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:23:53.845771   13136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:23:53.849438   13136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:23:59.545404   13136 out.go:177] * Using the hyperv driver based on user configuration
	I0421 18:23:59.550262   13136 start.go:297] selected driver: hyperv
	I0421 18:23:59.550262   13136 start.go:901] validating driver "hyperv" against <nil>
	I0421 18:23:59.550262   13136 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:23:59.604440   13136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:23:59.605922   13136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:23:59.606079   13136 cni.go:84] Creating CNI manager for ""
	I0421 18:23:59.606079   13136 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 18:23:59.606156   13136 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 18:23:59.606309   13136 start.go:340] cluster config:
	{Name:addons-519700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-519700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:23:59.606765   13136 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:23:59.610259   13136 out.go:177] * Starting "addons-519700" primary control-plane node in "addons-519700" cluster
	I0421 18:23:59.614492   13136 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 18:23:59.614931   13136 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 18:23:59.614980   13136 cache.go:56] Caching tarball of preloaded images
	I0421 18:23:59.615278   13136 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 18:23:59.615527   13136 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 18:23:59.616634   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\config.json ...
	I0421 18:23:59.616743   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\config.json: {Name:mk481e048d591020a587406dd85e277bae03b4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:59.618361   13136 start.go:360] acquireMachinesLock for addons-519700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:23:59.618581   13136 start.go:364] duration metric: took 114.2µs to acquireMachinesLock for "addons-519700"
	I0421 18:23:59.618840   13136 start.go:93] Provisioning new machine with config: &{Name:addons-519700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cluster
Name:addons-519700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 18:23:59.618964   13136 start.go:125] createHost starting for "" (driver="hyperv")
	I0421 18:23:59.621732   13136 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0421 18:23:59.621732   13136 start.go:159] libmachine.API.Create for "addons-519700" (driver="hyperv")
	I0421 18:23:59.621732   13136 client.go:168] LocalClient.Create starting
	I0421 18:23:59.623887   13136 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 18:23:59.838991   13136 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 18:24:00.214808   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 18:24:02.671633   13136 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 18:24:02.671844   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:02.671844   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 18:24:04.475030   13136 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 18:24:04.475030   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:04.475165   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 18:24:06.052177   13136 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 18:24:06.052470   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:06.052470   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 18:24:09.964910   13136 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 18:24:09.964910   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:09.967490   13136 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 18:24:10.498768   13136 main.go:141] libmachine: Creating SSH key...
	I0421 18:24:10.727675   13136 main.go:141] libmachine: Creating VM...
	I0421 18:24:10.727675   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 18:24:13.613091   13136 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 18:24:13.613091   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:13.613091   13136 main.go:141] libmachine: Using switch "Default Switch"
	I0421 18:24:13.613822   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 18:24:15.447443   13136 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 18:24:15.447443   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:15.447585   13136 main.go:141] libmachine: Creating VHD
	I0421 18:24:15.447585   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 18:24:19.207693   13136 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : BD58E6E6-9EBB-4312-9941-B272824CBBC2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 18:24:19.207766   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:19.207766   13136 main.go:141] libmachine: Writing magic tar header
	I0421 18:24:19.207989   13136 main.go:141] libmachine: Writing SSH key tar header
	I0421 18:24:19.210931   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 18:24:22.467534   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:22.467752   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:22.467752   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\disk.vhd' -SizeBytes 20000MB
	I0421 18:24:25.045355   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:25.045795   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:25.045853   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-519700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0421 18:24:28.945559   13136 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-519700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 18:24:28.946333   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:28.946420   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-519700 -DynamicMemoryEnabled $false
	I0421 18:24:31.182919   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:31.182919   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:31.183899   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-519700 -Count 2
	I0421 18:24:33.437905   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:33.437905   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:33.438213   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-519700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\boot2docker.iso'
	I0421 18:24:36.044458   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:36.044788   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:36.044877   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-519700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\disk.vhd'
	I0421 18:24:38.772135   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:38.772135   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:38.772135   13136 main.go:141] libmachine: Starting VM...
	I0421 18:24:38.772887   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-519700
	I0421 18:24:41.996656   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:41.996656   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:41.996656   13136 main.go:141] libmachine: Waiting for host to start...
	I0421 18:24:41.996656   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:24:44.257830   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:24:44.257830   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:44.258174   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:24:46.890720   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:46.891606   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:47.905821   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:24:50.122508   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:24:50.122508   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:50.122572   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:24:52.676076   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:52.676115   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:53.681568   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:24:55.918038   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:24:55.918801   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:55.918801   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:24:58.496389   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:24:58.497404   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:24:59.511456   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:01.684523   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:01.685380   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:01.685497   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:04.271080   13136 main.go:141] libmachine: [stdout =====>] : 
	I0421 18:25:04.272098   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:05.273366   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:07.505121   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:07.505121   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:07.505698   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:10.177502   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:10.177629   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:10.177629   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:12.295921   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:12.295991   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:12.295991   13136 machine.go:94] provisionDockerMachine start ...
	I0421 18:25:12.296048   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:14.438491   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:14.439092   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:14.439215   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:16.993223   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:16.993223   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:17.000032   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:25:17.011473   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:25:17.011473   13136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 18:25:17.135865   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 18:25:17.135983   13136 buildroot.go:166] provisioning hostname "addons-519700"
	I0421 18:25:17.135983   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:19.299406   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:19.299406   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:19.300317   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:21.869472   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:21.869472   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:21.876619   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:25:21.877312   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:25:21.877312   13136 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-519700 && echo "addons-519700" | sudo tee /etc/hostname
	I0421 18:25:22.028641   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-519700
	
	I0421 18:25:22.029213   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:24.207692   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:24.208032   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:24.208089   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:26.809118   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:26.809118   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:26.814651   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:25:26.815229   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:25:26.815229   13136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-519700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-519700/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-519700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:25:26.962720   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:25:26.962793   13136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 18:25:26.962905   13136 buildroot.go:174] setting up certificates
	I0421 18:25:26.962905   13136 provision.go:84] configureAuth start
	I0421 18:25:26.962905   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:29.086661   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:29.086661   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:29.086661   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:31.671697   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:31.671697   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:31.671697   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:33.780580   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:33.780580   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:33.780976   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:36.396794   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:36.396986   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:36.396986   13136 provision.go:143] copyHostCerts
	I0421 18:25:36.396986   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 18:25:36.398334   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 18:25:36.400143   13136 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 18:25:36.401249   13136 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-519700 san=[127.0.0.1 172.27.202.1 addons-519700 localhost minikube]
	I0421 18:25:36.818110   13136 provision.go:177] copyRemoteCerts
	I0421 18:25:36.832387   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:25:36.832387   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:38.961234   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:38.961759   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:38.961818   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:41.567017   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:41.567017   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:41.568060   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:25:41.676489   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8440682s)
	I0421 18:25:41.677089   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:25:41.727203   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 18:25:41.779726   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 18:25:41.832739   13136 provision.go:87] duration metric: took 14.8697319s to configureAuth
	I0421 18:25:41.832739   13136 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:25:41.833495   13136 config.go:182] Loaded profile config "addons-519700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:25:41.833495   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:43.988346   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:43.988346   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:43.988897   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:46.612753   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:46.612987   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:46.622611   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:25:46.623457   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:25:46.623457   13136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 18:25:46.752923   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 18:25:46.752923   13136 buildroot.go:70] root file system type: tmpfs
	I0421 18:25:46.752923   13136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 18:25:46.752923   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:48.839678   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:48.839678   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:48.839678   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:51.347213   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:51.347744   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:51.354227   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:25:51.354992   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:25:51.354992   13136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 18:25:51.506291   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 18:25:51.506291   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:25:53.629683   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:25:53.629683   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:53.630352   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:25:56.279810   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:25:56.280959   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:25:56.287986   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:25:56.287986   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:25:56.287986   13136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 18:25:58.555705   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
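The unit swap above is idempotent: the freshly rendered docker.service.new only replaces the unit on disk (followed by daemon-reload, enable and restart) when diff reports a difference, and here diff failed outright because no docker.service existed yet, so the new unit was installed and the enable step created the symlink. A minimal standalone sketch of the same compare-then-replace idiom, with illustrative paths rather than the provisioner's exact code:

	new=/tmp/docker.service.new
	cur=/lib/systemd/system/docker.service
	# replace the unit and restart the daemon only when the rendered content changed
	if ! sudo diff -u "$cur" "$new"; then
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	fi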
	
	I0421 18:25:58.555705   13136 machine.go:97] duration metric: took 46.2593395s to provisionDockerMachine
	I0421 18:25:58.555705   13136 client.go:171] duration metric: took 1m58.9319395s to LocalClient.Create
	I0421 18:25:58.555819   13136 start.go:167] duration metric: took 1m58.9332686s to libmachine.API.Create "addons-519700"
	I0421 18:25:58.555866   13136 start.go:293] postStartSetup for "addons-519700" (driver="hyperv")
	I0421 18:25:58.555866   13136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:25:58.568703   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:25:58.569720   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:26:00.675318   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:26:00.675318   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:00.676247   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:26:03.269427   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:26:03.270278   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:03.270501   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:26:03.374454   13136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8057184s)
	I0421 18:26:03.389155   13136 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:26:03.396980   13136 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:26:03.396980   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 18:26:03.397331   13136 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 18:26:03.397331   13136 start.go:296] duration metric: took 4.8414326s for postStartSetup
	I0421 18:26:03.400101   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:26:05.529465   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:26:05.529595   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:05.529595   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:26:08.125944   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:26:08.126140   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:08.126367   13136 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\config.json ...
	I0421 18:26:08.130312   13136 start.go:128] duration metric: took 2m8.5104647s to createHost
	I0421 18:26:08.130482   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:26:10.216040   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:26:10.216040   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:10.216295   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:26:12.768521   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:26:12.768610   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:12.775860   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:26:12.776572   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:26:12.776572   13136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 18:26:12.913158   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713723972.928762417
	
	I0421 18:26:12.913158   13136 fix.go:216] guest clock: 1713723972.928762417
	I0421 18:26:12.913158   13136 fix.go:229] Guest: 2024-04-21 18:26:12.928762417 +0000 UTC Remote: 2024-04-21 18:26:08.1303902 +0000 UTC m=+134.553515601 (delta=4.798372217s)
	I0421 18:26:12.913292   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:26:15.041615   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:26:15.042277   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:15.042277   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:26:17.572699   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:26:17.572699   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:17.579022   13136 main.go:141] libmachine: Using SSH client type: native
	I0421 18:26:17.579811   13136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.202.1 22 <nil> <nil>}
	I0421 18:26:17.579811   13136 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713723972
	I0421 18:26:17.727275   13136 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 18:26:12 UTC 2024
	
	I0421 18:26:17.727339   13136 fix.go:236] clock set: Sun Apr 21 18:26:12 UTC 2024
	 (err=<nil>)
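Clock skew handling: the guest clock is read with date +%s.%N and compared against the timestamp the host recorded when the VM was created (guest 18:26:12.928762417 vs. recorded 18:26:08.1303902, a delta of 4.798372217s); because the delta exceeded the tolerance, the guest clock was realigned with sudo date -s @1713723972. A rough shell equivalent of that check, with an illustrative reference epoch and tolerance:

	ref=1713723972            # reference epoch supplied by the caller (illustrative)
	guest=$(date +%s)         # guest clock, read over SSH in the real flow
	skew=$(( guest - ref ))
	[ "$skew" -lt 0 ] && skew=$(( -skew ))
	if [ "$skew" -gt 2 ]; then
	  sudo date -s "@${ref}"  # snap the guest clock to the reference
	fi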
	I0421 18:26:17.727339   13136 start.go:83] releasing machines lock for "addons-519700", held for 2m18.107807s
	I0421 18:26:17.727556   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:26:19.891340   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:26:19.891340   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:19.891340   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:26:22.451471   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:26:22.451471   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:22.456305   13136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:26:22.456448   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:26:22.471764   13136 ssh_runner.go:195] Run: cat /version.json
	I0421 18:26:22.471764   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:26:24.670666   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:26:24.670666   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:24.671338   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:26:24.697941   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:26:24.698325   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:24.698325   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:26:27.398735   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:26:27.399806   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:27.399806   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:26:27.420297   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:26:27.421307   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:26:27.421927   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:26:27.491184   13136 ssh_runner.go:235] Completed: cat /version.json: (5.0192464s)
	I0421 18:26:27.507173   13136 ssh_runner.go:195] Run: systemctl --version
	I0421 18:26:27.635951   13136 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1794665s)
	I0421 18:26:27.648694   13136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 18:26:27.657877   13136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:26:27.670264   13136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:26:27.699910   13136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 18:26:27.699980   13136 start.go:494] detecting cgroup driver to use...
	I0421 18:26:27.699980   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:26:27.750023   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 18:26:27.783562   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 18:26:27.803331   13136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 18:26:27.816955   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 18:26:27.857389   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 18:26:27.904774   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 18:26:27.940083   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 18:26:27.972752   13136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:26:28.012624   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 18:26:28.054687   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 18:26:28.089819   13136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 18:26:28.124139   13136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:26:28.157367   13136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:26:28.199042   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:26:28.434450   13136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 18:26:28.471889   13136 start.go:494] detecting cgroup driver to use...
	I0421 18:26:28.487515   13136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 18:26:28.529627   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:26:28.567342   13136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:26:28.610546   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:26:28.653547   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 18:26:28.695117   13136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 18:26:28.768258   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 18:26:28.797524   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:26:28.850750   13136 ssh_runner.go:195] Run: which cri-dockerd
	I0421 18:26:28.871618   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 18:26:28.889797   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 18:26:28.940496   13136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 18:26:29.151075   13136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 18:26:29.346491   13136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 18:26:29.346491   13136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 18:26:29.398433   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:26:29.599913   13136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 18:26:32.176678   13136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5765158s)
	I0421 18:26:32.193133   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 18:26:32.233486   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 18:26:32.271367   13136 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 18:26:32.497799   13136 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 18:26:32.711047   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:26:32.936161   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 18:26:32.978702   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 18:26:33.016668   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:26:33.215898   13136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 18:26:33.328620   13136 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 18:26:33.342649   13136 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 18:26:33.354634   13136 start.go:562] Will wait 60s for crictl version
	I0421 18:26:33.367603   13136 ssh_runner.go:195] Run: which crictl
	I0421 18:26:33.389666   13136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:26:33.447456   13136 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 18:26:33.457016   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 18:26:33.501214   13136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 18:26:33.540525   13136 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 18:26:33.540845   13136 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 18:26:33.545264   13136 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 18:26:33.545264   13136 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 18:26:33.545264   13136 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 18:26:33.545264   13136 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 18:26:33.548953   13136 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 18:26:33.548953   13136 ip.go:210] interface addr: 172.27.192.1/20
	I0421 18:26:33.565463   13136 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 18:26:33.572997   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:26:33.597453   13136 kubeadm.go:877] updating cluster {Name:addons-519700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-519700
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.202.1 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:26:33.597774   13136 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 18:26:33.606890   13136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 18:26:33.629520   13136 docker.go:685] Got preloaded images: 
	I0421 18:26:33.629520   13136 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0421 18:26:33.640125   13136 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 18:26:33.671647   13136 ssh_runner.go:195] Run: which lz4
	I0421 18:26:33.691669   13136 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 18:26:33.702239   13136 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 18:26:33.703144   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0421 18:26:35.720935   13136 docker.go:649] duration metric: took 2.042542s to copy over tarball
	I0421 18:26:35.732900   13136 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 18:26:40.994190   13136 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.2611922s)
	I0421 18:26:40.994190   13136 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 18:26:41.064828   13136 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 18:26:41.085264   13136 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0421 18:26:41.135346   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:26:41.364424   13136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 18:26:47.008883   13136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6444187s)
	I0421 18:26:47.018760   13136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 18:26:47.046839   13136 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0421 18:26:47.046909   13136 cache_images.go:84] Images are preloaded, skipping loading
	I0421 18:26:47.046909   13136 kubeadm.go:928] updating node { 172.27.202.1 8443 v1.30.0 docker true true} ...
	I0421 18:26:47.046909   13136 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-519700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.202.1
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-519700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:26:47.057446   13136 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0421 18:26:47.097237   13136 cni.go:84] Creating CNI manager for ""
	I0421 18:26:47.097237   13136 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 18:26:47.097237   13136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:26:47.097237   13136 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.202.1 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-519700 NodeName:addons-519700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.202.1"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.202.1 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:26:47.097237   13136 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.202.1
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-519700"
	  kubeletExtraArgs:
	    node-ip: 172.27.202.1
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.202.1"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
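The kubeadm config rendered above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file; it is copied to the guest as /var/tmp/minikube/kubeadm.yaml and fed to the bootstrap step later in the log. Condensed from the Start line further down (the real invocation carries a longer --ignore-preflight-errors list), the pattern is roughly:

	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem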
	
	I0421 18:26:47.110257   13136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:26:47.131029   13136 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:26:47.144190   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 18:26:47.164899   13136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0421 18:26:47.202491   13136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:26:47.237903   13136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0421 18:26:47.286204   13136 ssh_runner.go:195] Run: grep 172.27.202.1	control-plane.minikube.internal$ /etc/hosts
	I0421 18:26:47.292211   13136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.202.1	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 18:26:47.330602   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:26:47.551312   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:26:47.583796   13136 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700 for IP: 172.27.202.1
	I0421 18:26:47.583872   13136 certs.go:194] generating shared ca certs ...
	I0421 18:26:47.583872   13136 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:47.584324   13136 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 18:26:47.785725   13136 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0421 18:26:47.785725   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:47.787375   13136 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0421 18:26:47.787375   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:47.788741   13136 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 18:26:47.966726   13136 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0421 18:26:47.966726   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:47.967718   13136 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0421 18:26:47.967718   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:47.968837   13136 certs.go:256] generating profile certs ...
	I0421 18:26:47.969557   13136 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.key
	I0421 18:26:47.969557   13136 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt with IP's: []
	I0421 18:26:48.202648   13136 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt ...
	I0421 18:26:48.202648   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: {Name:mk3601cddc7294366b0fe02e6ffaf8a51a0febe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:48.204410   13136 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.key ...
	I0421 18:26:48.204410   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.key: {Name:mk0658edfb37578d813735161fe76b574e7a4a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:48.204947   13136 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.key.1500e9e8
	I0421 18:26:48.205887   13136 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.crt.1500e9e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.202.1]
	I0421 18:26:48.398311   13136 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.crt.1500e9e8 ...
	I0421 18:26:48.398311   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.crt.1500e9e8: {Name:mk563b099e3b3515677b9758a476f0db6520442d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:48.399409   13136 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.key.1500e9e8 ...
	I0421 18:26:48.399409   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.key.1500e9e8: {Name:mke50bf744fd4dc66defcc1c53da4af73325fe60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:48.400485   13136 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.crt.1500e9e8 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.crt
	I0421 18:26:48.412580   13136 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.key.1500e9e8 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.key
	I0421 18:26:48.413228   13136 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.key
	I0421 18:26:48.414259   13136 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.crt with IP's: []
	I0421 18:26:48.548709   13136 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.crt ...
	I0421 18:26:48.548709   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.crt: {Name:mk6c96bc576d5d53f9a33efe798ffeb5641ff39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:26:48.550534   13136 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.key ...
	I0421 18:26:48.550534   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.key: {Name:mk0403c7a5a180bdc1cf547fa659c519b5b78cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
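The profile's apiserver certificate generated above is signed for the SAN set [10.96.0.1 127.0.0.1 10.0.0.1 172.27.202.1], i.e. the first service-CIDR address, loopback, and the node IP. One way to confirm the SANs on such a certificate with stock openssl (path illustrative):

	openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'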
	I0421 18:26:48.560876   13136 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 18:26:48.561585   13136 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 18:26:48.561780   13136 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 18:26:48.561998   13136 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 18:26:48.563303   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:26:48.612791   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:26:48.661561   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:26:48.709505   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:26:48.756265   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0421 18:26:48.806479   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:26:48.858991   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:26:48.910848   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 18:26:48.962146   13136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:26:49.009205   13136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:26:49.060613   13136 ssh_runner.go:195] Run: openssl version
	I0421 18:26:49.086064   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:26:49.118319   13136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:26:49.127251   13136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:26:49.140590   13136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:26:49.163234   13136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
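The minikube CA becomes system-trusted by copying it to /usr/share/ca-certificates, linking it into /etc/ssl/certs, and then adding a second symlink named after its OpenSSL subject hash (b5213941 here), which is how OpenSSL's hashed certificate-directory lookup resolves the CA. Reproducing the hash-and-link step by hand looks roughly like:

	hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"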
	I0421 18:26:49.202457   13136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:26:49.209116   13136 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 18:26:49.209526   13136 kubeadm.go:391] StartCluster: {Name:addons-519700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-519700 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.202.1 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:26:49.220249   13136 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 18:26:49.257934   13136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 18:26:49.291484   13136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 18:26:49.324089   13136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 18:26:49.344359   13136 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 18:26:49.344452   13136 kubeadm.go:156] found existing configuration files:
	
	I0421 18:26:49.358258   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 18:26:49.377893   13136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 18:26:49.391055   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 18:26:49.427661   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 18:26:49.455501   13136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 18:26:49.469590   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 18:26:49.506786   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 18:26:49.525319   13136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 18:26:49.541455   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 18:26:49.575278   13136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 18:26:49.595692   13136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 18:26:49.609177   13136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 18:26:49.629315   13136 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 18:26:49.923037   13136 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 18:27:05.015092   13136 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 18:27:05.016089   13136 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 18:27:05.016089   13136 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 18:27:05.016089   13136 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 18:27:05.016089   13136 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0421 18:27:05.016089   13136 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 18:27:05.025091   13136 out.go:204]   - Generating certificates and keys ...
	I0421 18:27:05.025091   13136 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 18:27:05.025091   13136 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 18:27:05.025091   13136 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 18:27:05.025091   13136 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 18:27:05.025091   13136 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 18:27:05.025091   13136 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 18:27:05.025091   13136 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 18:27:05.026079   13136 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-519700 localhost] and IPs [172.27.202.1 127.0.0.1 ::1]
	I0421 18:27:05.026079   13136 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 18:27:05.026079   13136 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-519700 localhost] and IPs [172.27.202.1 127.0.0.1 ::1]
	I0421 18:27:05.026079   13136 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 18:27:05.026079   13136 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 18:27:05.026079   13136 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 18:27:05.027092   13136 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 18:27:05.027092   13136 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 18:27:05.027092   13136 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 18:27:05.027092   13136 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 18:27:05.027092   13136 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 18:27:05.027092   13136 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 18:27:05.027092   13136 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 18:27:05.028076   13136 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 18:27:05.051945   13136 out.go:204]   - Booting up control plane ...
	I0421 18:27:05.052496   13136 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 18:27:05.052673   13136 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 18:27:05.052806   13136 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 18:27:05.052806   13136 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 18:27:05.052806   13136 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 18:27:05.053348   13136 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 18:27:05.053677   13136 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 18:27:05.053927   13136 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 18:27:05.054102   13136 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.501646801s
	I0421 18:27:05.054277   13136 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 18:27:05.054430   13136 kubeadm.go:309] [api-check] The API server is healthy after 7.502902089s
	I0421 18:27:05.054744   13136 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 18:27:05.055141   13136 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 18:27:05.055316   13136 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 18:27:05.055674   13136 kubeadm.go:309] [mark-control-plane] Marking the node addons-519700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 18:27:05.055674   13136 kubeadm.go:309] [bootstrap-token] Using token: 0jdrro.cjr8r54fteby9xfd
	I0421 18:27:05.061750   13136 out.go:204]   - Configuring RBAC rules ...
	I0421 18:27:05.061921   13136 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 18:27:05.061921   13136 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 18:27:05.062596   13136 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 18:27:05.062596   13136 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 18:27:05.062596   13136 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 18:27:05.062596   13136 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 18:27:05.063509   13136 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 18:27:05.063509   13136 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 18:27:05.063509   13136 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 18:27:05.063509   13136 kubeadm.go:309] 
	I0421 18:27:05.063509   13136 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 18:27:05.063509   13136 kubeadm.go:309] 
	I0421 18:27:05.063509   13136 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 18:27:05.063509   13136 kubeadm.go:309] 
	I0421 18:27:05.063509   13136 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 18:27:05.064506   13136 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 18:27:05.064506   13136 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 18:27:05.064506   13136 kubeadm.go:309] 
	I0421 18:27:05.064506   13136 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 18:27:05.064506   13136 kubeadm.go:309] 
	I0421 18:27:05.064506   13136 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 18:27:05.064506   13136 kubeadm.go:309] 
	I0421 18:27:05.064506   13136 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 18:27:05.064506   13136 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 18:27:05.064506   13136 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 18:27:05.064506   13136 kubeadm.go:309] 
	I0421 18:27:05.065504   13136 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 18:27:05.065504   13136 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 18:27:05.065504   13136 kubeadm.go:309] 
	I0421 18:27:05.065504   13136 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0jdrro.cjr8r54fteby9xfd \
	I0421 18:27:05.065504   13136 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 \
	I0421 18:27:05.065504   13136 kubeadm.go:309] 	--control-plane 
	I0421 18:27:05.065504   13136 kubeadm.go:309] 
	I0421 18:27:05.065504   13136 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 18:27:05.066534   13136 kubeadm.go:309] 
	I0421 18:27:05.066534   13136 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0jdrro.cjr8r54fteby9xfd \
	I0421 18:27:05.066534   13136 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 18:27:05.066534   13136 cni.go:84] Creating CNI manager for ""
	I0421 18:27:05.066534   13136 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 18:27:05.073538   13136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 18:27:05.088168   13136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 18:27:05.117358   13136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0421 18:27:05.158039   13136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 18:27:05.173025   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:05.173025   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-519700 minikube.k8s.io/updated_at=2024_04_21T18_27_05_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=addons-519700 minikube.k8s.io/primary=true
	I0421 18:27:05.186075   13136 ops.go:34] apiserver oom_adj: -16
	I0421 18:27:05.350604   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:05.859132   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:06.366084   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:06.854457   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:07.357282   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:07.862857   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:08.362928   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:08.853758   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:09.365328   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:09.852991   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:10.352182   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:10.857320   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:11.358703   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:11.850928   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:12.361576   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:12.865106   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:13.353551   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:13.860235   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:14.363173   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:14.864414   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:15.350950   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:15.854564   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:16.360840   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:16.855512   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:17.360989   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:17.856467   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:18.356727   13136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 18:27:18.545223   13136 kubeadm.go:1107] duration metric: took 13.3870897s to wait for elevateKubeSystemPrivileges
	W0421 18:27:18.545223   13136 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 18:27:18.545223   13136 kubeadm.go:393] duration metric: took 29.3354908s to StartCluster
	I0421 18:27:18.545223   13136 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:27:18.545223   13136 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:27:18.548102   13136 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:27:18.550412   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 18:27:18.550822   13136 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.202.1 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 18:27:18.550822   13136 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0421 18:27:18.553787   13136 out.go:177] * Verifying Kubernetes components...
	I0421 18:27:18.551066   13136 addons.go:69] Setting gcp-auth=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting cloud-spanner=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting default-storageclass=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting inspektor-gadget=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting helm-tiller=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting ingress-dns=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting registry=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting metrics-server=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting yakd=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting volumesnapshots=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting storage-provisioner=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 addons.go:69] Setting ingress=true in profile "addons-519700"
	I0421 18:27:18.551066   13136 config.go:182] Loaded profile config "addons-519700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon helm-tiller=true in "addons-519700"
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon metrics-server=true in "addons-519700"
	I0421 18:27:18.557535   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-519700"
	I0421 18:27:18.553998   13136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-519700"
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon volumesnapshots=true in "addons-519700"
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon ingress-dns=true in "addons-519700"
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon inspektor-gadget=true in "addons-519700"
	I0421 18:27:18.557535   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-519700"
	I0421 18:27:18.557535   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon registry=true in "addons-519700"
	I0421 18:27:18.557535   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon ingress=true in "addons-519700"
	I0421 18:27:18.558067   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon yakd=true in "addons-519700"
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon cloud-spanner=true in "addons-519700"
	I0421 18:27:18.553998   13136 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-519700"
	I0421 18:27:18.558479   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.553998   13136 addons.go:234] Setting addon storage-provisioner=true in "addons-519700"
	I0421 18:27:18.559094   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.553998   13136 mustload.go:65] Loading cluster: addons-519700
	I0421 18:27:18.557535   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.557535   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.559537   13136 config.go:182] Loaded profile config "addons-519700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:27:18.559537   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.557535   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.558067   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.558479   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:18.562528   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.563560   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.565493   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.565730   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.566298   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.567018   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.567288   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.568446   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.568446   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.568880   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.569220   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.569220   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.569767   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.569882   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:18.582545   13136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:27:19.455887   13136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 18:27:19.665908   13136 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.0833555s)
	I0421 18:27:19.730674   13136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:27:21.214632   13136 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7587336s)
	I0421 18:27:21.214632   13136 start.go:946] {"host.minikube.internal": 172.27.192.1} host record injected into CoreDNS's ConfigMap
	I0421 18:27:21.216634   13136 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.4859487s)
	I0421 18:27:21.222890   13136 node_ready.go:35] waiting up to 6m0s for node "addons-519700" to be "Ready" ...
	I0421 18:27:21.483721   13136 node_ready.go:49] node "addons-519700" has status "Ready":"True"
	I0421 18:27:21.483721   13136 node_ready.go:38] duration metric: took 260.7343ms for node "addons-519700" to be "Ready" ...
	I0421 18:27:21.483721   13136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:27:21.705356   13136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:21.948489   13136 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-519700" context rescaled to 1 replicas
	I0421 18:27:23.775765   13136 pod_ready.go:102] pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace has status "Ready":"False"
	I0421 18:27:24.833337   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:24.833337   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:24.837850   13136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:27:24.841127   13136 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:27:24.841127   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 18:27:24.841735   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:24.847822   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:24.847822   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:24.850806   13136 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0421 18:27:24.858813   13136 out.go:177]   - Using image docker.io/registry:2.8.3
	I0421 18:27:24.862800   13136 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0421 18:27:24.862800   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0421 18:27:24.863806   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:24.973809   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:24.973809   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:24.979041   13136 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0421 18:27:25.004119   13136 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0421 18:27:25.004119   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0421 18:27:25.004119   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.023045   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.023045   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.029468   13136 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0421 18:27:25.038153   13136 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0421 18:27:25.038153   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0421 18:27:25.038153   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.142423   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.142423   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.145410   13136 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0421 18:27:25.150584   13136 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0421 18:27:25.150584   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0421 18:27:25.150584   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.151599   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.151599   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.154596   13136 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-519700"
	I0421 18:27:25.154596   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:25.146412   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.155596   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.164570   13136 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0421 18:27:25.155596   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.146412   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.147414   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.146412   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.172560   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.187994   13136 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0421 18:27:25.174200   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.174200   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.174200   13136 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0421 18:27:25.191551   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0421 18:27:25.191551   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.222366   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0421 18:27:25.209362   13136 addons.go:234] Setting addon default-storageclass=true in "addons-519700"
	I0421 18:27:25.215377   13136 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0421 18:27:25.227355   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0421 18:27:25.227355   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.236370   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0421 18:27:25.242374   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0421 18:27:25.238557   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:25.247274   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.262283   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0421 18:27:25.268761   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0421 18:27:25.274487   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0421 18:27:25.277487   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0421 18:27:25.280503   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0421 18:27:25.284529   13136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0421 18:27:25.284529   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0421 18:27:25.284529   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.325486   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.325486   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.325486   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:25.495687   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.495687   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.499539   13136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:27:25.502541   13136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:27:25.504791   13136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0421 18:27:25.508564   13136 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0421 18:27:25.508564   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0421 18:27:25.508564   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.618031   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.618031   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.627030   13136 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0421 18:27:25.627030   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:25.633021   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:25.640124   13136 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0421 18:27:25.644047   13136 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0421 18:27:25.648022   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0421 18:27:25.648022   13136 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0421 18:27:25.648022   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0421 18:27:25.648022   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.648022   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:25.959300   13136 pod_ready.go:102] pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace has status "Ready":"False"
	I0421 18:27:28.340414   13136 pod_ready.go:102] pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace has status "Ready":"False"
	I0421 18:27:29.392704   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:29.392704   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:29.398708   13136 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0421 18:27:29.400775   13136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0421 18:27:29.400775   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0421 18:27:29.401711   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:30.398644   13136 pod_ready.go:102] pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace has status "Ready":"False"
	I0421 18:27:30.606962   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:30.606962   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:30.606962   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:30.747349   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:30.747349   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:30.748352   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:30.751431   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:30.751431   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:30.751431   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:30.753435   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:30.753435   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:30.753435   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:30.809077   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:30.809077   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:30.809077   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:31.105674   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:31.105674   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:31.105674   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:31.264932   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:31.264932   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:31.264932   13136 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 18:27:31.264932   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 18:27:31.264932   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:31.334455   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:31.334455   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:31.335455   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:31.368090   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:31.368090   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:31.392077   13136 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0421 18:27:31.427228   13136 out.go:177]   - Using image docker.io/busybox:stable
	I0421 18:27:31.450592   13136 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0421 18:27:31.451591   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0421 18:27:31.451591   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:31.442581   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:31.471925   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:31.471925   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:32.297856   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:32.297984   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:32.298248   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:33.059776   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:33.059776   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:33.059776   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:33.188988   13136 pod_ready.go:102] pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace has status "Ready":"False"
	I0421 18:27:33.355419   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:33.364586   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:33.364586   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:33.399031   13136 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0421 18:27:33.399031   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:33.768030   13136 pod_ready.go:92] pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace has status "Ready":"True"
	I0421 18:27:33.768030   13136 pod_ready.go:81] duration metric: took 12.06259s for pod "coredns-7db6d8ff4d-4sf2z" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:33.768030   13136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b5bnd" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.121539   13136 pod_ready.go:92] pod "coredns-7db6d8ff4d-b5bnd" in "kube-system" namespace has status "Ready":"True"
	I0421 18:27:34.121539   13136 pod_ready.go:81] duration metric: took 353.5065ms for pod "coredns-7db6d8ff4d-b5bnd" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.121539   13136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.483548   13136 pod_ready.go:92] pod "etcd-addons-519700" in "kube-system" namespace has status "Ready":"True"
	I0421 18:27:34.483548   13136 pod_ready.go:81] duration metric: took 362.0061ms for pod "etcd-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.483548   13136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.899567   13136 pod_ready.go:92] pod "kube-apiserver-addons-519700" in "kube-system" namespace has status "Ready":"True"
	I0421 18:27:34.899567   13136 pod_ready.go:81] duration metric: took 416.0161ms for pod "kube-apiserver-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.899567   13136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.980402   13136 pod_ready.go:92] pod "kube-controller-manager-addons-519700" in "kube-system" namespace has status "Ready":"True"
	I0421 18:27:34.980402   13136 pod_ready.go:81] duration metric: took 80.8347ms for pod "kube-controller-manager-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:34.980402   13136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9cznh" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:35.021719   13136 pod_ready.go:92] pod "kube-proxy-9cznh" in "kube-system" namespace has status "Ready":"True"
	I0421 18:27:35.021719   13136 pod_ready.go:81] duration metric: took 41.3161ms for pod "kube-proxy-9cznh" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:35.021719   13136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:35.117585   13136 pod_ready.go:92] pod "kube-scheduler-addons-519700" in "kube-system" namespace has status "Ready":"True"
	I0421 18:27:35.117585   13136 pod_ready.go:81] duration metric: took 95.8656ms for pod "kube-scheduler-addons-519700" in "kube-system" namespace to be "Ready" ...
	I0421 18:27:35.117585   13136 pod_ready.go:38] duration metric: took 13.6337682s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:27:35.117585   13136 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:27:35.139599   13136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:27:35.513793   13136 api_server.go:72] duration metric: took 16.9626083s to wait for apiserver process to appear ...
	I0421 18:27:35.513793   13136 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:27:35.513793   13136 api_server.go:253] Checking apiserver healthz at https://172.27.202.1:8443/healthz ...
	I0421 18:27:35.644598   13136 api_server.go:279] https://172.27.202.1:8443/healthz returned 200:
	ok
	I0421 18:27:35.689607   13136 api_server.go:141] control plane version: v1.30.0
	I0421 18:27:35.689607   13136 api_server.go:131] duration metric: took 175.8128ms to wait for apiserver health ...
	I0421 18:27:35.689607   13136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:27:35.767274   13136 system_pods.go:59] 7 kube-system pods found
	I0421 18:27:35.767274   13136 system_pods.go:61] "coredns-7db6d8ff4d-4sf2z" [515d8a1c-75e4-4443-9583-307ea29686b1] Running
	I0421 18:27:35.767274   13136 system_pods.go:61] "coredns-7db6d8ff4d-b5bnd" [1f4bcf04-5666-4e81-b836-b676d12f3e76] Running
	I0421 18:27:35.767274   13136 system_pods.go:61] "etcd-addons-519700" [e7d8c08e-a8e3-4042-b0b1-cfcd404b14b6] Running
	I0421 18:27:35.767274   13136 system_pods.go:61] "kube-apiserver-addons-519700" [ba1dcbde-7224-47e5-98e5-479cf37d60bb] Running
	I0421 18:27:35.767274   13136 system_pods.go:61] "kube-controller-manager-addons-519700" [6c101dd4-6dac-48e7-8895-63617cb6ef50] Running
	I0421 18:27:35.767274   13136 system_pods.go:61] "kube-proxy-9cznh" [49fc321c-0d43-41d7-99c3-b3d266333ab2] Running
	I0421 18:27:35.767274   13136 system_pods.go:61] "kube-scheduler-addons-519700" [18dc3774-d99c-4f6b-9543-759f70548e21] Running
	I0421 18:27:35.767274   13136 system_pods.go:74] duration metric: took 77.6659ms to wait for pod list to return data ...
	I0421 18:27:35.767824   13136 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:27:35.807817   13136 default_sa.go:45] found service account: "default"
	I0421 18:27:35.807817   13136 default_sa.go:55] duration metric: took 39.9924ms for default service account to be created ...
	I0421 18:27:35.807817   13136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:27:35.834504   13136 system_pods.go:86] 7 kube-system pods found
	I0421 18:27:35.835502   13136 system_pods.go:89] "coredns-7db6d8ff4d-4sf2z" [515d8a1c-75e4-4443-9583-307ea29686b1] Running
	I0421 18:27:35.835502   13136 system_pods.go:89] "coredns-7db6d8ff4d-b5bnd" [1f4bcf04-5666-4e81-b836-b676d12f3e76] Running
	I0421 18:27:35.835502   13136 system_pods.go:89] "etcd-addons-519700" [e7d8c08e-a8e3-4042-b0b1-cfcd404b14b6] Running
	I0421 18:27:35.835502   13136 system_pods.go:89] "kube-apiserver-addons-519700" [ba1dcbde-7224-47e5-98e5-479cf37d60bb] Running
	I0421 18:27:35.835502   13136 system_pods.go:89] "kube-controller-manager-addons-519700" [6c101dd4-6dac-48e7-8895-63617cb6ef50] Running
	I0421 18:27:35.835502   13136 system_pods.go:89] "kube-proxy-9cznh" [49fc321c-0d43-41d7-99c3-b3d266333ab2] Running
	I0421 18:27:35.835502   13136 system_pods.go:89] "kube-scheduler-addons-519700" [18dc3774-d99c-4f6b-9543-759f70548e21] Running
	I0421 18:27:35.835502   13136 system_pods.go:126] duration metric: took 26.9936ms to wait for k8s-apps to be running ...
	I0421 18:27:35.835502   13136 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:27:35.885388   13136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:27:36.002760   13136 system_svc.go:56] duration metric: took 166.8048ms WaitForService to wait for kubelet
	I0421 18:27:36.002760   13136 kubeadm.go:576] duration metric: took 17.4515713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:27:36.003385   13136 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:27:36.032452   13136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:27:36.032452   13136 node_conditions.go:123] node cpu capacity is 2
	I0421 18:27:36.033440   13136 node_conditions.go:105] duration metric: took 30.0549ms to run NodePressure ...
	I0421 18:27:36.033440   13136 start.go:240] waiting for startup goroutines ...
	I0421 18:27:36.515255   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:36.515255   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:36.515255   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:37.072931   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:37.073028   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:37.073028   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:37.162447   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:37.162447   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:37.162447   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:37.724509   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:37.725513   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:37.725513   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:37.770391   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:37.770391   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:37.771396   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:37.829628   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:37.829628   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:37.830522   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:37.906205   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:37.906205   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:37.911481   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:38.110117   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:38.110117   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:38.110958   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:38.280506   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:38.280506   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:38.281428   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:38.376304   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0421 18:27:38.429491   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:38.429491   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:38.430404   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:38.540310   13136 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0421 18:27:38.540310   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0421 18:27:38.543745   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:38.543745   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:38.544750   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:38.600798   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0421 18:27:38.607820   13136 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0421 18:27:38.607820   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0421 18:27:38.702553   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:38.702553   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:38.702553   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:38.916263   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:27:38.926734   13136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0421 18:27:38.927283   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0421 18:27:38.990380   13136 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0421 18:27:38.990380   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0421 18:27:39.064594   13136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0421 18:27:39.065280   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0421 18:27:39.248017   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:39.248086   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:39.248683   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:39.254409   13136 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0421 18:27:39.254409   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0421 18:27:39.288277   13136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0421 18:27:39.288592   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0421 18:27:39.299699   13136 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0421 18:27:39.299699   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0421 18:27:39.360380   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:39.360380   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:39.365150   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:39.388891   13136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0421 18:27:39.388891   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0421 18:27:39.396894   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0421 18:27:39.444899   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:39.444899   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:39.445898   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:39.481474   13136 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0421 18:27:39.481474   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0421 18:27:39.513296   13136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0421 18:27:39.513396   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0421 18:27:39.521605   13136 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0421 18:27:39.521605   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0421 18:27:39.698745   13136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 18:27:39.698745   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0421 18:27:39.759444   13136 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0421 18:27:39.759444   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0421 18:27:39.772044   13136 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0421 18:27:39.772200   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0421 18:27:39.794681   13136 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0421 18:27:39.794773   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0421 18:27:39.868209   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0421 18:27:39.879209   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0421 18:27:39.937739   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0421 18:27:39.945505   13136 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0421 18:27:39.947910   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0421 18:27:40.032654   13136 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0421 18:27:40.032654   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0421 18:27:40.117370   13136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0421 18:27:40.117370   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0421 18:27:40.184550   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.8081308s)
	I0421 18:27:40.299038   13136 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0421 18:27:40.299038   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0421 18:27:40.307064   13136 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0421 18:27:40.307189   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0421 18:27:40.410477   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0421 18:27:40.497198   13136 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0421 18:27:40.497198   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0421 18:27:40.497198   13136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0421 18:27:40.497198   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0421 18:27:40.540216   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:40.540216   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:40.541567   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:40.555193   13136 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0421 18:27:40.555193   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0421 18:27:40.768389   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:40.768389   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:40.770104   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:40.825182   13136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0421 18:27:40.825182   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0421 18:27:40.869116   13136 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0421 18:27:40.869208   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0421 18:27:40.901440   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:40.901675   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:40.902407   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:41.065354   13136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0421 18:27:41.065354   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0421 18:27:41.095891   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0421 18:27:41.307424   13136 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0421 18:27:41.307424   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0421 18:27:41.467400   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0421 18:27:41.512398   13136 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0421 18:27:41.512398   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0421 18:27:41.679622   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 18:27:41.680602   13136 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0421 18:27:41.680602   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0421 18:27:41.779217   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:41.779217   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:41.780662   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
	I0421 18:27:41.788253   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0421 18:27:41.903638   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0421 18:27:42.022319   13136 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0421 18:27:42.022319   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0421 18:27:42.121393   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.5205707s)
	I0421 18:27:42.486329   13136 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0421 18:27:42.531840   13136 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0421 18:27:42.531915   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0421 18:27:43.319554   13136 addons.go:234] Setting addon gcp-auth=true in "addons-519700"
	I0421 18:27:43.319845   13136 host.go:66] Checking if "addons-519700" exists ...
	I0421 18:27:43.320550   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:43.416126   13136 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:27:43.416211   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0421 18:27:44.544544   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:27:44.615418   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.2184881s)
	I0421 18:27:44.615579   13136 addons.go:470] Verifying addon registry=true in "addons-519700"
	I0421 18:27:44.621055   13136 out.go:177] * Verifying registry addon...
	I0421 18:27:44.615827   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.6995234s)
	I0421 18:27:44.625837   13136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0421 18:27:44.632757   13136 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0421 18:27:44.632757   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
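The kapi.go:96 entries above (and the long run of them that follows) are minikube's registry-addon verifier re-checking the pod's phase roughly every half second until it stops reporting Pending. As a rough, hedged illustration only, and not minikube's actual kapi code, a stdlib-only Go wait loop of that shape could be sketched as follows; podPhase here is a hypothetical stand-in for the Kubernetes API lookup:

package main

import (
	"fmt"
	"time"
)

// podPhase is a hypothetical stand-in for querying status.phase of the pods
// matching a label selector; the real verifier calls the Kubernetes API.
func podPhase(selector string) string { return "Pending" }

func main() {
	selector := "kubernetes.io/minikube-addons=registry"
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		if phase := podPhase(selector); phase == "Running" {
			fmt.Println("pod is Running")
			return
		} else {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, phase)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	fmt.Println("timed out waiting for", selector)
}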
	I0421 18:27:45.134779   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:45.644045   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:45.761309   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:45.761309   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:45.776596   13136 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0421 18:27:45.776596   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-519700 ).state
	I0421 18:27:46.185103   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:46.883538   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:47.161254   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:47.846330   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:48.185585   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:48.206433   13136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:27:48.206487   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:48.206555   13136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]
	I0421 18:27:48.687611   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:49.238713   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:49.755163   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:50.167679   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:50.685977   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:51.141486   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:51.208508   13136 main.go:141] libmachine: [stdout =====>] : 172.27.202.1
	
	I0421 18:27:51.208508   13136 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:27:51.209136   13136 sshutil.go:53] new ssh client: &{IP:172.27.202.1 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-519700\id_rsa Username:docker}
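The libmachine/sshutil pairs in this stretch show the Hyper-V driver shelling out to PowerShell for the VM's address and then opening an SSH client against whatever it returns. A hedged sketch of that kind of invocation from Go is below; the PowerShell expression is copied from the log, while the surrounding program is illustrative and not minikube's driver code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same PowerShell expression as in the log: the first IP address of the
	// VM's first network adapter; the VM name comes from this test run.
	script := `(( Hyper-V\Get-VM addons-519700 ).networkadapters[0]).ipaddresses[0]`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		fmt.Println("powershell failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("dialing SSH to", ip+":22") // the driver then builds an SSH client from this address
}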
	I0421 18:27:51.641954   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:52.133634   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:52.638730   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:53.317048   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:53.621933   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.7535639s)
	I0421 18:27:53.621992   13136 addons.go:470] Verifying addon ingress=true in "addons-519700"
	I0421 18:27:53.624915   13136 out.go:177] * Verifying ingress addon...
	I0421 18:27:53.622171   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.742812s)
	I0421 18:27:53.622227   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.6843923s)
	I0421 18:27:53.622302   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.2116575s)
	I0421 18:27:53.622302   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (12.5263233s)
	I0421 18:27:53.622557   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.1540028s)
	I0421 18:27:53.622557   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.942851s)
	I0421 18:27:53.622624   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.8342882s)
	I0421 18:27:53.624976   13136 addons.go:470] Verifying addon metrics-server=true in "addons-519700"
	I0421 18:27:53.631847   13136 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-519700 service yakd-dashboard -n yakd-dashboard
	
	I0421 18:27:53.631847   13136 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0421 18:27:53.741001   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:53.759371   13136 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0421 18:27:53.759455   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0421 18:27:53.777414   13136 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
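The storage-provisioner-rancher warning above is a standard Kubernetes optimistic-concurrency conflict: the StorageClass was modified by something else between the read and the write, so the update is rejected and has to be redone against the latest version. A hedged, generic sketch of that read-modify-write retry pattern is below; markDefault is a hypothetical stand-in, not minikube's code:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// markDefault is a hypothetical stand-in for "read the latest StorageClass,
// set the default-class annotation, write it back"; the first write loses the race.
func markDefault(attempt int) error {
	if attempt == 0 {
		return errConflict
	}
	return nil
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		err := markDefault(attempt)
		if err == nil {
			fmt.Println("storage class local-path marked as default")
			return
		}
		if !errors.Is(err, errConflict) {
			fmt.Println("giving up:", err)
			return
		}
		time.Sleep(200 * time.Millisecond) // back off, then redo the read-modify-write
	}
	fmt.Println("exhausted retries")
}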
	I0421 18:27:54.173424   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:54.178802   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:54.688238   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:54.688238   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:55.156920   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:55.157406   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:55.692568   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:55.699461   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:55.787416   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.883596s)
	I0421 18:27:55.787416   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.2417918s)
	I0421 18:27:55.787416   13136 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-519700"
	W0421 18:27:55.787416   13136 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0421 18:27:55.787416   13136 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.0107492s)
	I0421 18:27:55.787603   13136 retry.go:31] will retry after 312.607388ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
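The failure above is an ordering problem: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl call that creates the snapshot.storage.k8s.io CRDs, and those CRDs are not yet established when the custom resource is validated, so kubectl reports "ensure CRDs are installed first". The log shows minikube handling this by retrying after a short delay (retry.go:31 above) and then re-running the apply with --force, which completes a few lines further down. A hedged, stdlib-only sketch of that retry-with-backoff around an external kubectl call follows; the file list and retry parameters are illustrative, not the exact ones minikube uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative subset; in the log the same apply covers all six snapshot manifests.
	args := []string{"apply",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}
	backoff := 300 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Println("apply succeeded")
			return
		}
		// first attempt typically fails with "no matches for kind ... ensure CRDs are installed first"
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("apply did not succeed after retries")
}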
	I0421 18:27:55.796425   13136 out.go:177] * Verifying csi-hostpath-driver addon...
	I0421 18:27:55.800039   13136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0421 18:27:55.803032   13136 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0421 18:27:55.804047   13136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0421 18:27:55.806048   13136 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0421 18:27:55.806048   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0421 18:27:55.897476   13136 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0421 18:27:55.897591   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0421 18:27:55.907378   13136 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0421 18:27:55.907378   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:56.001658   13136 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0421 18:27:56.001727   13136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0421 18:27:56.068380   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0421 18:27:56.119152   13136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0421 18:27:56.141862   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:56.147512   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:56.317832   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:56.650681   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:56.653207   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:56.820364   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:57.141402   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:57.146477   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:57.316590   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:57.665219   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:57.665374   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:57.822937   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:58.186276   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:58.204486   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:58.365542   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:58.435003   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.3665229s)
	I0421 18:27:58.442686   13136 addons.go:470] Verifying addon gcp-auth=true in "addons-519700"
	I0421 18:27:58.448266   13136 out.go:177] * Verifying gcp-auth addon...
	I0421 18:27:58.454279   13136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0421 18:27:58.466336   13136 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0421 18:27:58.466426   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:27:58.638562   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:58.665509   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:58.827150   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:58.965448   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:27:59.056561   13136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.9373882s)
	I0421 18:27:59.139346   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:59.144788   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:59.316644   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:59.472629   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:27:59.633869   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:27:59.647675   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:27:59.825114   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:27:59.962200   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:00.139125   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:00.143830   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:00.315206   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:00.472596   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:00.649654   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:00.650084   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:00.822316   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:00.961138   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:01.140628   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:01.146636   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:01.331237   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:01.467459   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:01.644452   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:01.645398   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:01.820123   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:01.959417   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:02.150550   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:02.151232   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:02.323684   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:02.462180   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:02.639079   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:02.643979   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:02.828543   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:02.967191   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:03.142400   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:03.145370   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:03.319302   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:03.474136   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:03.641562   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:03.648876   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:03.828035   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:03.965807   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:04.141824   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:04.143005   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:04.318194   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:04.474222   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:04.634278   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:04.648663   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:04.826930   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:04.965447   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:05.143278   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:05.144354   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:05.318510   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:05.473886   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:05.634541   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:05.648556   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:05.835733   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:05.962921   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:06.139133   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:06.153133   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:06.328649   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:06.467487   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:06.637636   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:06.640411   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:06.827068   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:06.969262   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:07.147849   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:07.163128   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:07.318846   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:07.472015   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:07.650752   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:07.651104   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:07.824600   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:08.214335   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:08.217211   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:08.219876   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:08.551488   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:08.556139   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:08.641707   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:08.647030   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:09.072247   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:09.081101   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:09.207219   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:09.207767   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:09.324647   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:09.464922   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:09.640685   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:09.644886   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:09.828911   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:09.966644   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:10.144344   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:10.147905   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:10.317144   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:10.470543   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:10.647713   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:10.648317   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:10.817803   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:10.972441   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:11.136437   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:11.140680   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:11.324517   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:11.463711   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:11.852119   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:11.852408   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:11.856260   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:11.964636   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:12.137590   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:12.143691   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:12.556187   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:12.560100   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:12.637968   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:12.643239   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:13.378770   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:13.384478   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:13.384557   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:13.387390   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:13.395021   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:13.460214   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:13.641459   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:13.645874   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:13.871754   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:14.126166   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:14.149743   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:14.157704   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:14.317002   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:14.480306   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:14.646087   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:14.648093   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:14.928002   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:14.958563   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:15.138291   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:15.141871   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:15.325724   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:15.463303   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:15.641270   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:15.646787   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:15.829493   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:15.969268   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:16.144424   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:16.144424   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:16.320180   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:16.474164   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:16.649878   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:16.649878   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:16.824534   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:16.962339   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:17.138796   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:17.142395   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:17.314100   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:17.471404   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:17.649417   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:17.649609   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:17.821116   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:17.961571   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:18.136141   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:18.139139   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:18.326775   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:18.462490   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:18.640250   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:18.645099   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:18.814414   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:18.971097   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:19.147108   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:19.150603   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:19.319826   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:19.474013   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:19.636027   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:19.641525   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:19.827649   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:19.966432   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:20.138314   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:20.143480   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:20.330651   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:20.469452   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:20.648520   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:20.648747   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:20.821329   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:20.977110   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:21.137464   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:21.143724   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:21.331152   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:21.485330   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:21.644347   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:21.644562   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:21.815837   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:21.968650   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:22.145977   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:22.146983   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:22.319096   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:22.473791   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:22.634648   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:22.665080   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:22.822766   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:22.961887   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:23.138463   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:23.142190   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:23.325983   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:23.466393   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:23.641751   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:23.649194   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:23.814840   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:23.969508   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:24.144611   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:24.146491   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:24.322507   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:24.476225   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:24.635541   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:24.641875   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:24.825326   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:24.962393   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:25.138369   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:25.143776   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:25.328746   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:25.466762   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:25.645077   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:25.645077   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:25.819294   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:25.976064   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:26.137903   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:26.140988   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:26.327752   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:26.465730   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:26.643030   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:26.644880   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:26.819583   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:26.963219   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:27.136966   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:27.141785   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:27.325799   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:27.464900   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:27.642015   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:27.642577   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:27.819068   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:27.973536   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:28.147688   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:28.148472   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:28.323749   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:28.463904   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:28.639392   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:28.643669   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:28.831188   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:28.971452   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:29.150166   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:29.152087   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:29.323403   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:29.463000   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:29.641150   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:29.645924   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:29.830133   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:29.969687   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:30.144951   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:30.144951   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:30.320349   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:30.460482   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:30.636261   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:30.642722   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:30.829262   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:30.967944   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:31.143440   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:31.144078   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:31.317297   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:31.471678   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:31.647308   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:31.649202   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:31.823618   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:31.965053   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:32.145397   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:32.145840   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:32.317261   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:32.471998   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:32.648527   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:32.650224   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:32.830051   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:32.960967   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:33.138591   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:33.142132   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:33.333114   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:33.467105   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:33.655087   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:33.656055   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:33.819156   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:33.973146   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:34.147176   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:34.147735   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:34.460667   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:34.465121   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:34.648468   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:34.653062   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:35.007330   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:35.008496   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:35.806057   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:35.809016   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:35.811789   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:35.813768   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:35.816790   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:35.818435   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:35.825276   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:35.975173   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:36.135292   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:36.140793   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:36.323333   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:36.462052   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:36.700012   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:36.704625   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:36.831095   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:36.973715   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:37.146365   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:37.146365   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:37.318445   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:37.472940   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:37.634382   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:37.648957   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:37.825147   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:37.963710   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:38.139958   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:38.145459   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:38.317010   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:38.611333   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:38.641139   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:38.646749   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:38.815239   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:38.969330   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:39.146574   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:39.147112   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:39.582826   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:39.584832   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:39.642190   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:39.646195   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:39.853859   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:39.972354   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:40.146812   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:40.147813   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:40.316780   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:40.471895   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:40.647181   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:40.648189   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:40.821220   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:40.962430   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:41.141480   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:41.152075   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:41.329153   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:41.466844   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:41.645031   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:41.646034   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:41.818475   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:41.961478   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:42.136534   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:42.140519   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:42.324683   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:42.465151   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:42.643381   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:42.644266   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:42.817765   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:42.973939   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:43.148222   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:43.149231   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:43.325961   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:43.463494   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:43.641217   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:43.644886   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:43.816101   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:43.972692   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:44.147733   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:44.150826   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:44.325777   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:44.474587   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:44.636279   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:44.640332   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:44.825870   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:44.963074   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:45.142415   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:45.148618   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:45.334048   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:45.470191   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:45.646180   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:45.651092   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:45.820360   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:45.973862   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:46.135800   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:46.141051   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:46.327106   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:46.468039   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:46.646872   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:46.646975   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:46.821475   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:46.961273   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:47.137649   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:47.142210   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:47.327809   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:47.471156   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:47.645753   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:47.645991   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:47.821750   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:47.962708   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:48.136785   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:48.142379   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:48.330071   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:48.468510   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:48.646409   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:48.647395   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:48.818970   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:48.975494   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:49.138972   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:49.145498   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:49.329605   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:49.470814   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:49.644632   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:49.645870   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:49.820430   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:49.973537   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:50.147406   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:50.148650   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:50.321479   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:50.480753   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:50.913778   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:50.914774   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:50.918110   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:50.964818   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:51.133886   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:51.148912   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:52.507243   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:52.507243   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:52.512325   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:52.517086   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:52.521944   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:52.522595   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:52.527215   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:52.532184   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:52.650192   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:52.650971   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:52.821568   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:52.976797   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:53.147403   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0421 18:28:53.148514   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:53.319480   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:53.474723   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:53.646830   13136 kapi.go:107] duration metric: took 1m9.0204399s to wait for kubernetes.io/minikube-addons=registry ...
	I0421 18:28:53.647044   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:53.819547   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:53.972514   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:54.144904   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:54.326973   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:54.475638   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:54.653886   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:54.827462   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:54.966493   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:55.147552   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:55.331420   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:55.475161   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:55.652613   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:55.833996   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:55.971609   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:56.149090   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:56.321001   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:56.474180   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:56.651248   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:56.828103   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:56.974431   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:57.145692   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:57.322135   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:57.474027   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:57.651607   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:57.824767   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:57.962698   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:58.155645   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:58.315129   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:58.468505   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:58.645884   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:58.822272   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:58.961462   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:59.155216   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:59.330949   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:59.468664   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:28:59.643921   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:28:59.822249   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:28:59.960508   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:00.153268   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:00.328918   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:00.467048   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:00.642643   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:00.822521   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:00.963716   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:01.156084   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:01.331828   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:01.963727   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:01.964031   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:01.964524   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:01.969326   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:02.410380   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:02.410380   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:02.547768   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:02.648378   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:02.819977   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:02.961558   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:03.150936   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:03.326058   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:03.466447   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:03.658588   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:03.817369   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:03.969465   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:04.145822   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:04.321771   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:04.474965   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:04.651609   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:04.828914   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:04.967546   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:05.144545   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:05.319183   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:05.475305   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:05.650457   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:05.832290   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:05.966851   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:06.157453   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:06.317230   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:06.474212   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:06.650952   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:06.828489   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:06.967485   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:07.142948   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:07.319962   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:07.461174   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:07.651926   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:07.828508   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:07.974348   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:08.502469   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:08.503737   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:08.507783   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:08.643259   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:08.820708   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:08.975802   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:09.150902   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:09.361906   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:09.483447   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:09.647244   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:09.832272   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:09.980367   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:10.149903   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:10.324162   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:10.462242   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:10.655087   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:10.815722   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:10.971599   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:11.147751   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:11.322201   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:11.461009   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:11.651181   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:11.827166   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:11.966648   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:12.143147   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:12.319519   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:12.473648   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:12.649888   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:12.830510   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:12.961731   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:13.809117   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:13.809835   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:13.809835   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:14.061899   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:14.061899   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:14.065876   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:14.147292   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:14.324353   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:14.465284   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:14.655063   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:14.830071   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:15.003022   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:15.142031   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:15.327665   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:15.478537   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:15.683296   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:15.849939   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:15.962431   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:16.154017   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:16.317687   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:16.471571   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:16.650223   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:16.821524   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:16.961716   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:17.154667   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:17.317318   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:17.470977   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:17.645012   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:17.818609   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:17.961727   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:18.172319   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:18.328884   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:18.467630   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:18.643058   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:18.816599   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:18.972550   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:19.149515   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:19.326995   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:19.467771   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:19.642694   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:19.817919   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:19.974317   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:20.147373   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:20.322483   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:20.462562   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:20.653243   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:20.832092   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:20.970378   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:21.147541   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:21.323700   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:21.464726   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:21.657900   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:21.819909   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:21.974279   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:22.150626   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:22.325974   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:22.461878   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:22.656381   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:22.828440   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:22.963864   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:23.155028   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:23.330409   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:23.559264   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:24.011014   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:24.012707   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:24.012707   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:24.498169   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:24.505272   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:24.505496   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:24.661440   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:24.831990   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:24.964622   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:25.155399   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:25.319296   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:25.475016   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:25.658799   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:25.831298   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:26.005162   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:26.159299   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:26.335396   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:26.495846   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:26.655106   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:26.826557   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:26.965536   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:27.155940   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:27.322399   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:27.473812   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:27.649597   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:27.824871   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:27.963627   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:28.156381   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:28.316162   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:28.471406   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:28.648043   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:28.827687   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:28.965509   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:29.155687   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:29.317201   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:29.471338   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:29.643678   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:29.820745   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:29.964119   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:30.151554   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:30.327142   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:30.466481   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:30.657833   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:30.951912   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:30.967476   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:31.155638   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:31.328352   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:31.466611   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:31.655065   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:31.818209   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:31.973210   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:32.148061   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:32.323728   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:32.464395   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:32.655959   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:32.829152   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:32.968076   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:33.145194   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:33.321488   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:33.468972   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:33.651812   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:33.829453   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:33.968976   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:34.145868   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:34.320815   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:34.460889   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:34.652941   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:34.826314   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:34.963214   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:35.301266   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:35.331923   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:35.464955   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:35.851429   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:35.851629   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:36.254084   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:36.254200   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:36.332244   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:36.475402   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:36.642531   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:36.817197   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:36.969738   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:37.149735   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:37.320181   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:37.480401   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:37.653764   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:37.833024   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:37.966626   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:38.158410   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:38.315306   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:38.750217   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:38.757865   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:38.823865   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:38.980105   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:39.154984   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:39.331269   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:39.472394   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:39.655774   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:39.817216   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:39.974890   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:40.150210   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:40.322992   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:40.465368   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:40.655701   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:40.816482   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:40.969207   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:41.144899   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:41.319940   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:41.474710   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:41.649207   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:41.824620   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:41.962720   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:42.154921   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:42.330749   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:42.469167   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:42.656313   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:42.818373   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:42.969310   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:43.146886   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:43.328986   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:43.496418   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:43.650739   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:43.823274   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:43.961898   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:44.149759   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:44.327386   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:44.468797   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:44.645223   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:44.820196   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:44.959819   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:45.151213   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:45.327942   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:45.464243   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:45.656343   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:45.831764   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:45.969220   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:46.145929   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:46.322358   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:46.474809   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:46.654616   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:46.825187   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:46.966023   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:47.143918   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:47.324952   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:47.474160   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:47.648905   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:47.821808   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:47.975117   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:48.152010   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:48.328443   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:48.468906   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:48.641834   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:48.819339   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:48.970819   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:49.146900   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:49.327729   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:49.474496   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:49.649739   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:49.827619   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:49.968899   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:50.157964   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:50.331528   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:50.470739   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:50.645802   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:50.823461   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:50.961362   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:51.154315   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:51.472101   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:51.473157   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:51.657951   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:51.814675   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:51.970133   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:52.145809   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:52.319071   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:52.474032   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:52.650579   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:52.824769   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:52.963394   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:53.154386   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:53.318550   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:53.472029   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:53.648784   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:53.822799   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:53.965410   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:54.145879   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:54.322220   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:54.854849   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:54.855679   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:54.855679   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:54.975629   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:55.151648   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:55.330301   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:55.467033   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:55.643129   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:55.818289   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:55.972913   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:56.158695   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:56.328720   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:56.468074   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:56.646018   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:56.823057   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:56.962842   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:57.155884   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:57.330274   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:57.468273   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:57.642831   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:57.838496   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:58.416910   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:58.418562   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:58.425305   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:58.475427   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:58.654439   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:58.829657   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:58.966661   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:59.159830   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:59.319693   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:59.473394   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:29:59.671758   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:29:59.819333   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:29:59.977645   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:00.164356   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:00.366776   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:00.465111   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:00.656839   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:00.838954   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:00.978187   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:01.192372   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:01.330004   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:01.477058   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:01.650426   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:01.827873   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:01.974851   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:02.151941   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:02.329246   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:02.467006   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:02.644124   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:02.841909   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:02.979116   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:03.143147   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:03.325154   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:03.474863   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:03.656201   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:03.827347   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:03.962476   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:04.154456   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:04.330860   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:04.473676   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:04.643974   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:04.820172   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:04.976096   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:05.151084   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:05.323791   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:05.464983   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:05.658837   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:05.824620   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:05.971302   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:06.147471   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:06.324147   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:06.470043   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:06.698419   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:06.827220   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:06.967495   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:07.160526   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:07.339997   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:07.471444   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:07.660248   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:07.821340   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:07.976555   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:08.150767   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:08.329804   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:08.470375   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:08.646710   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:08.823719   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:08.962439   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:09.151168   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:09.355674   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:09.467550   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:09.644714   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:09.821298   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:09.960112   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:10.151763   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:10.326516   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:10.466830   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:10.674231   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:10.840416   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:10.974918   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:11.157904   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:11.323164   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:11.460938   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:11.653823   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:11.843109   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:11.968053   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:12.143449   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:12.317524   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:12.475266   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:12.650703   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:12.856942   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:12.968046   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:13.160549   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:13.330325   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:13.511715   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:13.655575   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:13.818958   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:13.970088   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:14.148005   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:14.327939   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:14.466400   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:14.645338   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:14.827499   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:14.977923   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:15.152599   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:15.328281   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:15.468294   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:15.643060   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:15.816674   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:15.970254   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:16.147177   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:16.321912   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:16.460847   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:16.654187   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:16.832674   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:16.967695   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:17.145140   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:17.328182   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:17.474437   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:17.654312   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:17.830019   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:17.964303   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:18.158896   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:18.319748   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:18.473335   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:18.662769   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:18.836499   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:18.973106   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:19.155949   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:19.317094   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:19.471245   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:19.656379   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:19.824607   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:19.963823   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:20.152399   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:20.332332   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:20.469678   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:20.647522   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:20.821606   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:20.976277   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:21.151445   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:21.329614   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:21.468109   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:21.645041   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:21.952302   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:21.974735   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:22.151490   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:22.329544   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:22.468517   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:22.644580   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:22.822843   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:22.966429   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:23.159374   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:23.317770   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:23.472606   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:23.648306   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:23.820639   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:23.963339   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:24.153399   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:24.315520   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:24.472737   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:24.647733   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:24.844257   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:24.964948   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:25.157246   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:25.317104   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:25.471674   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:25.646687   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:25.822205   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:25.962165   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:26.211569   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:26.329780   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:26.467418   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:26.645025   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:26.819293   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:26.975522   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:27.149077   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:27.608901   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:27.609815   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:28.181486   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:28.181927   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:28.182794   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:28.505325   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:28.509283   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:28.516596   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:28.652291   13136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0421 18:30:28.832702   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:28.971501   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:29.145239   13136 kapi.go:107] duration metric: took 2m35.5123038s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0421 18:30:29.322910   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:29.473459   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:29.821923   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:29.963553   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:30.316233   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:30.474227   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:30.822814   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:30.962404   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:31.328020   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:31.465415   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:31.958309   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:31.966391   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:32.328609   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:32.466238   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:32.818174   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0421 18:30:33.013665   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:33.405487   13136 kapi.go:107] duration metric: took 2m37.6003364s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0421 18:30:33.487627   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:33.964273   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:34.473771   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:35.117829   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:35.469414   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:35.963347   13136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0421 18:30:36.475937   13136 kapi.go:107] duration metric: took 2m38.0205516s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0421 18:30:36.478860   13136 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-519700 cluster.
	I0421 18:30:36.481894   13136 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0421 18:30:36.484862   13136 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0421 18:30:36.487858   13136 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0421 18:30:36.493042   13136 addons.go:505] duration metric: took 3m17.9397194s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner metrics-server helm-tiller inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0421 18:30:36.493042   13136 start.go:245] waiting for cluster config update ...
	I0421 18:30:36.493042   13136 start.go:254] writing updated cluster config ...
	I0421 18:30:36.505858   13136 ssh_runner.go:195] Run: rm -f paused
	I0421 18:30:36.804262   13136 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 18:30:36.807836   13136 out.go:177] * Done! kubectl is now configured to use "addons-519700" cluster and "default" namespace by default
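	The gcp-auth messages above describe the opt-out mechanism in prose; a minimal pod manifest sketch is shown below, assuming the addon only checks for the presence of the `gcp-auth-skip-secret` label on the pod (the pod name, container name, and "true" value are illustrative and not taken from this run):
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds          # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"      # opts this pod out of credential mounting by the gcp-auth addon
	spec:
	  containers:
	  - name: app                         # hypothetical container
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]
	
	As the log notes, pods that already exist are not retrofitted: they must be recreated, or the addon re-enabled with --refresh, for the credential mount (or this opt-out label) to take effect.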
	
	
	==> Docker <==
	Apr 21 18:31:22 addons-519700 dockerd[1337]: time="2024-04-21T18:31:22.670817325Z" level=warning msg="cleaning up after shim disconnected" id=4a92c89e58bbce6e43be741fdd049662d5366103c2a7604bf89a055689303fbc namespace=moby
	Apr 21 18:31:22 addons-519700 dockerd[1337]: time="2024-04-21T18:31:22.671121024Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 18:31:22 addons-519700 cri-dockerd[1239]: time="2024-04-21T18:31:22Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-lm7j7_kube-system\": unexpected command output nsenter: cannot open /proc/3640/ns/net: No such file or directory\n with error: exit status 1"
	Apr 21 18:31:22 addons-519700 dockerd[1331]: time="2024-04-21T18:31:22.980797215Z" level=info msg="ignoring event" container=0542a2c6cb2972fafdb52fb36e4cb93c04ffacd2deda2443b6ad70119dbc7776 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 18:31:22 addons-519700 dockerd[1337]: time="2024-04-21T18:31:22.982167211Z" level=info msg="shim disconnected" id=0542a2c6cb2972fafdb52fb36e4cb93c04ffacd2deda2443b6ad70119dbc7776 namespace=moby
	Apr 21 18:31:22 addons-519700 dockerd[1337]: time="2024-04-21T18:31:22.982275911Z" level=warning msg="cleaning up after shim disconnected" id=0542a2c6cb2972fafdb52fb36e4cb93c04ffacd2deda2443b6ad70119dbc7776 namespace=moby
	Apr 21 18:31:22 addons-519700 dockerd[1337]: time="2024-04-21T18:31:22.982291510Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 18:31:26 addons-519700 dockerd[1331]: time="2024-04-21T18:31:26.410451735Z" level=info msg="ignoring event" container=7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 18:31:26 addons-519700 dockerd[1337]: time="2024-04-21T18:31:26.412630428Z" level=info msg="shim disconnected" id=7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5 namespace=moby
	Apr 21 18:31:26 addons-519700 dockerd[1337]: time="2024-04-21T18:31:26.413281326Z" level=warning msg="cleaning up after shim disconnected" id=7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5 namespace=moby
	Apr 21 18:31:26 addons-519700 dockerd[1337]: time="2024-04-21T18:31:26.413326626Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 18:31:26 addons-519700 dockerd[1337]: time="2024-04-21T18:31:26.440117040Z" level=warning msg="cleanup warnings time=\"2024-04-21T18:31:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 18:31:26 addons-519700 dockerd[1331]: time="2024-04-21T18:31:26.651441965Z" level=info msg="ignoring event" container=23a7d086bc2a4ac10db70d5768b69e319eb7312ce09dec7457312bc752a09baf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 18:31:26 addons-519700 dockerd[1337]: time="2024-04-21T18:31:26.652944561Z" level=info msg="shim disconnected" id=23a7d086bc2a4ac10db70d5768b69e319eb7312ce09dec7457312bc752a09baf namespace=moby
	Apr 21 18:31:26 addons-519700 dockerd[1337]: time="2024-04-21T18:31:26.653022960Z" level=warning msg="cleaning up after shim disconnected" id=23a7d086bc2a4ac10db70d5768b69e319eb7312ce09dec7457312bc752a09baf namespace=moby
	Apr 21 18:31:26 addons-519700 dockerd[1337]: time="2024-04-21T18:31:26.653035960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 18:31:28 addons-519700 dockerd[1331]: time="2024-04-21T18:31:28.124831992Z" level=info msg="ignoring event" container=30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 18:31:28 addons-519700 dockerd[1337]: time="2024-04-21T18:31:28.129665580Z" level=info msg="shim disconnected" id=30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662 namespace=moby
	Apr 21 18:31:28 addons-519700 dockerd[1337]: time="2024-04-21T18:31:28.129941479Z" level=warning msg="cleaning up after shim disconnected" id=30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662 namespace=moby
	Apr 21 18:31:28 addons-519700 dockerd[1337]: time="2024-04-21T18:31:28.129964479Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 18:31:28 addons-519700 dockerd[1337]: time="2024-04-21T18:31:28.250818070Z" level=warning msg="cleanup warnings time=\"2024-04-21T18:31:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 18:31:28 addons-519700 dockerd[1331]: time="2024-04-21T18:31:28.484600173Z" level=info msg="ignoring event" container=122c64e7615112c7fc572d92f157e650c9330a05f79edec12ff288b0b2106f42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 18:31:28 addons-519700 dockerd[1337]: time="2024-04-21T18:31:28.486834167Z" level=info msg="shim disconnected" id=122c64e7615112c7fc572d92f157e650c9330a05f79edec12ff288b0b2106f42 namespace=moby
	Apr 21 18:31:28 addons-519700 dockerd[1337]: time="2024-04-21T18:31:28.486914667Z" level=warning msg="cleaning up after shim disconnected" id=122c64e7615112c7fc572d92f157e650c9330a05f79edec12ff288b0b2106f42 namespace=moby
	Apr 21 18:31:28 addons-519700 dockerd[1337]: time="2024-04-21T18:31:28.486932067Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	611b8259122ce       a416a98b71e22                                                                                                                                29 seconds ago       Exited              helper-pod                               0                   d03fc1052837c       helper-pod-delete-pvc-4f455934-1b66-474d-b61f-c07d9fbf4635
	7d5d877233f41       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1                            34 seconds ago       Exited              gadget                                   3                   2bed526eb7aae       gadget-ckvnx
	50399d0035e8e       busybox@sha256:c3839dd800b9eb7603340509769c43e146a74c63dca3045a8e7dc8ee07e53966                                                              45 seconds ago       Exited              busybox                                  0                   fa5753978ff05       test-local-path
	fb83646e62fa8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   725c829508a5a       gcp-auth-5db96cd9b4-gvt2n
	ede127b684d18       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   a13bf4c46851c       csi-hostpathplugin-g5fxg
	0ba5d824b90dc       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             About a minute ago   Running             controller                               0                   0857737e1dde6       ingress-nginx-controller-84df5799c-977q6
	412da99e195e2       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   a13bf4c46851c       csi-hostpathplugin-g5fxg
	7d5e52c445d51       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   a13bf4c46851c       csi-hostpathplugin-g5fxg
	ef2692685c9f9       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   a13bf4c46851c       csi-hostpathplugin-g5fxg
	cd9aa997f4005       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   a13bf4c46851c       csi-hostpathplugin-g5fxg
	bc27762b34e7f       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   2e9ac88bac6f5       csi-hostpath-resizer-0
	203b59159d786       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   abeb6b9ebf3db       csi-hostpath-attacher-0
	f0936f9edbb84       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   a13bf4c46851c       csi-hostpathplugin-g5fxg
	0e07c4e4c93db       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   0ed59f66956b7       snapshot-controller-745499f584-jzwnl
	d6d55317150f8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   8dcb01ad0aec8       snapshot-controller-745499f584-dphbf
	981ce1a523ca4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   2 minutes ago        Exited              patch                                    0                   59429bdc6a756       ingress-nginx-admission-patch-lm67j
	4e1519e3fed27       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   2 minutes ago        Exited              create                                   0                   e520d64145802       ingress-nginx-admission-create-xhltn
	767f4dc6b2ac4       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   31e8b1d2787dd       local-path-provisioner-8d985888d-95fkk
	e23279c2cbdee       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   7c0bed7649fdf       yakd-dashboard-5ddbf7d777-m99b9
	26141dac6bae1       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50                               2 minutes ago        Running             cloud-spanner-emulator                   0                   ad20e09aa6318       cloud-spanner-emulator-8677549d7-s5lp5
	07ec12b5377b8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   2b02f6b014e65       kube-ingress-dns-minikube
	cd6a592d3e860       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   37a4dad7bcdbb       nvidia-device-plugin-daemonset-7fzh9
	e3f59132bbdd8       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   1b4e27bf0df82       storage-provisioner
	ed5a37eda3638       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   90d381cd57a9f       coredns-7db6d8ff4d-4sf2z
	546ec02785431       a0bf559e280cf                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   c1ff1477ecd37       kube-proxy-9cznh
	64c71080d693e       c7aad43836fa5                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   83eb7a580a0bd       kube-controller-manager-addons-519700
	b174933890f64       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   67a8d8b90f033       etcd-addons-519700
	e03eb59df668c       259c8277fcbbc                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   a3e6118c56968       kube-scheduler-addons-519700
	484f1f09eccf5       c42f13656d0b2                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   d8acec6ce61b7       kube-apiserver-addons-519700
	
	
	==> controller_ingress [0ba5d824b90d] <==
	W0421 18:30:28.822692       8 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0421 18:30:28.823115       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0421 18:30:28.835780       8 main.go:249] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
	I0421 18:30:29.144429       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0421 18:30:29.175224       8 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0421 18:30:29.191132       8 nginx.go:265] "Starting NGINX Ingress controller"
	I0421 18:30:29.209426       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3d805118-bc45-4014-a93e-5e095bccb1fb", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0421 18:30:29.220484       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d0bcb5ba-5a8d-4e75-bd5e-de107e74edd1", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0421 18:30:29.221412       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"d4d71685-7ed0-451f-8978-d6d8cbf8d0be", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0421 18:30:30.396289       8 nginx.go:308] "Starting NGINX process"
	I0421 18:30:30.396695       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0421 18:30:30.397348       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0421 18:30:30.396812       8 nginx.go:328] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0421 18:30:30.420747       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0421 18:30:30.428237       8 status.go:84] "New leader elected" identity="ingress-nginx-controller-84df5799c-977q6"
	I0421 18:30:30.442442       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-84df5799c-977q6" node="addons-519700"
	I0421 18:30:30.544278       8 controller.go:210] "Backend successfully reloaded"
	I0421 18:30:30.544371       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0421 18:30:30.544798       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-84df5799c-977q6", UID:"fd3e63b6-aba7-4ec1-b281-d1fd726a7c7f", APIVersion:"v1", ResourceVersion:"1250", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         71f78d49f0a496c31d4c19f095469f3f23900f8a
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [ed5a37eda363] <==
	[INFO] 10.244.0.6:53303 - 8998 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136999s
	[INFO] 10.244.0.6:41814 - 27322 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131199s
	[INFO] 10.244.0.6:41814 - 53436 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000236699s
	[INFO] 10.244.0.6:60762 - 15423 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000357098s
	[INFO] 10.244.0.6:60762 - 24380 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000279698s
	[INFO] 10.244.0.6:52307 - 27703 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000178199s
	[INFO] 10.244.0.6:52307 - 4657 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000291698s
	[INFO] 10.244.0.6:60802 - 63861 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000174999s
	[INFO] 10.244.0.6:60802 - 3433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000899s
	[INFO] 10.244.0.6:60763 - 52072 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000495897s
	[INFO] 10.244.0.6:60763 - 103 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000718s
	[INFO] 10.244.0.6:52202 - 16688 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000722s
	[INFO] 10.244.0.6:52202 - 60469 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000186199s
	[INFO] 10.244.0.6:36612 - 10920 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000210099s
	[INFO] 10.244.0.6:36612 - 13482 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063399s
	[INFO] 10.244.0.22:59865 - 22328 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373698s
	[INFO] 10.244.0.22:47564 - 21834 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001585493s
	[INFO] 10.244.0.22:41893 - 19575 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000309598s
	[INFO] 10.244.0.22:58888 - 38101 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000353599s
	[INFO] 10.244.0.22:37860 - 59324 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000311899s
	[INFO] 10.244.0.22:34677 - 45732 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194899s
	[INFO] 10.244.0.22:51520 - 20669 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.003289986s
	[INFO] 10.244.0.22:59106 - 50882 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.013904338s
	[INFO] 10.244.0.26:49088 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000409899s
	[INFO] 10.244.0.26:53627 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0002059s
	
	
	==> describe nodes <==
	Name:               addons-519700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-519700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=addons-519700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_27_05_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-519700
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-519700"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:27:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-519700
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:31:41 +0000   Sun, 21 Apr 2024 18:26:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:31:41 +0000   Sun, 21 Apr 2024 18:26:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:31:41 +0000   Sun, 21 Apr 2024 18:26:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:31:41 +0000   Sun, 21 Apr 2024 18:27:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.202.1
	  Hostname:    addons-519700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 6633d482d6634871984e4cef8ae22051
	  System UUID:                2d7e6059-8cba-b547-8d38-ce9e612dbb0d
	  Boot ID:                    ce60769b-6e6a-4214-93fc-60f4b69b4b42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-8677549d7-s5lp5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  gadget                      gadget-ckvnx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  gcp-auth                    gcp-auth-5db96cd9b4-gvt2n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  ingress-nginx               ingress-nginx-controller-84df5799c-977q6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m52s
	  kube-system                 coredns-7db6d8ff4d-4sf2z                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m27s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 csi-hostpathplugin-g5fxg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-addons-519700                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-addons-519700                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-519700       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-9cznh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-519700                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 nvidia-device-plugin-daemonset-7fzh9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-745499f584-dphbf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 snapshot-controller-745499f584-jzwnl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  local-path-storage          local-path-provisioner-8d985888d-95fkk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-m99b9             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m50s (x8 over 4m50s)  kubelet          Node addons-519700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m50s (x8 over 4m50s)  kubelet          Node addons-519700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m50s (x7 over 4m50s)  kubelet          Node addons-519700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s                  kubelet          Node addons-519700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s                  kubelet          Node addons-519700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s                  kubelet          Node addons-519700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m37s                  kubelet          Node addons-519700 status is now: NodeReady
	  Normal  RegisteredNode           4m28s                  node-controller  Node addons-519700 event: Registered Node addons-519700 in Controller
	
	
	==> dmesg <==
	[  +4.941830] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.879621] kauditd_printk_skb: 42 callbacks suppressed
	[  +7.431778] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.026060] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.621023] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.474137] kauditd_printk_skb: 59 callbacks suppressed
	[Apr21 18:28] kauditd_printk_skb: 98 callbacks suppressed
	[ +39.131988] kauditd_printk_skb: 2 callbacks suppressed
	[Apr21 18:29] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.009957] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.114108] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.054445] kauditd_printk_skb: 21 callbacks suppressed
	[Apr21 18:30] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.731495] kauditd_printk_skb: 22 callbacks suppressed
	[ +14.988265] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.687736] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.492246] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.030204] kauditd_printk_skb: 3 callbacks suppressed
	[ +10.920916] kauditd_printk_skb: 14 callbacks suppressed
	[Apr21 18:31] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.196010] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.078563] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.839783] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.837831] kauditd_printk_skb: 22 callbacks suppressed
	[ +16.452313] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [b174933890f6] <==
	{"level":"warn","ts":"2024-04-21T18:30:28.515466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.826551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-04-21T18:30:28.516716Z","caller":"traceutil/trace.go:171","msg":"trace[654062080] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1245; }","duration":"288.120646ms","start":"2024-04-21T18:30:28.228585Z","end":"2024-04-21T18:30:28.516705Z","steps":["trace[654062080] 'agreement among raft nodes before linearized reading'  (duration: 286.770552ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:30:28.516427Z","caller":"traceutil/trace.go:171","msg":"trace[1699837773] transaction","detail":"{read_only:false; response_revision:1245; number_of_response:1; }","duration":"321.6699ms","start":"2024-04-21T18:30:28.194743Z","end":"2024-04-21T18:30:28.516413Z","steps":["trace[1699837773] 'process raft request'  (duration: 318.468814ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:30:28.51737Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T18:30:28.194728Z","time spent":"322.545695ms","remote":"127.0.0.1:39022","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1238 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-04-21T18:30:28.51966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.90169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86245"}
	{"level":"info","ts":"2024-04-21T18:30:28.520209Z","caller":"traceutil/trace.go:171","msg":"trace[1964629382] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1245; }","duration":"186.526888ms","start":"2024-04-21T18:30:28.333673Z","end":"2024-04-21T18:30:28.5202Z","steps":["trace[1964629382] 'agreement among raft nodes before linearized reading'  (duration: 182.187007ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:30:31.965886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.122234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86245"}
	{"level":"info","ts":"2024-04-21T18:30:31.965951Z","caller":"traceutil/trace.go:171","msg":"trace[918889862] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1263; }","duration":"129.223233ms","start":"2024-04-21T18:30:31.836712Z","end":"2024-04-21T18:30:31.965935Z","steps":["trace[918889862] 'range keys from in-memory index tree'  (duration: 128.753036ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:30:35.123972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.423335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4363"}
	{"level":"info","ts":"2024-04-21T18:30:35.124175Z","caller":"traceutil/trace.go:171","msg":"trace[770870696] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1282; }","duration":"150.655034ms","start":"2024-04-21T18:30:34.973499Z","end":"2024-04-21T18:30:35.124154Z","steps":["trace[770870696] 'range keys from in-memory index tree'  (duration: 150.221636ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:30:36.486756Z","caller":"traceutil/trace.go:171","msg":"trace[455475883] linearizableReadLoop","detail":"{readStateIndex:1361; appliedIndex:1358; }","duration":"117.025882ms","start":"2024-04-21T18:30:36.369695Z","end":"2024-04-21T18:30:36.486721Z","steps":["trace[455475883] 'read index received'  (duration: 17.227924ms)","trace[455475883] 'applied index is now lower than readState.Index'  (duration: 99.797058ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T18:30:36.487334Z","caller":"traceutil/trace.go:171","msg":"trace[1666875496] transaction","detail":"{read_only:false; response_revision:1296; number_of_response:1; }","duration":"131.342218ms","start":"2024-04-21T18:30:36.355973Z","end":"2024-04-21T18:30:36.487315Z","steps":["trace[1666875496] 'process raft request'  (duration: 78.571352ms)","trace[1666875496] 'compare'  (duration: 51.688171ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-21T18:30:36.487502Z","caller":"traceutil/trace.go:171","msg":"trace[537339351] transaction","detail":"{read_only:false; response_revision:1298; number_of_response:1; }","duration":"123.216654ms","start":"2024-04-21T18:30:36.364273Z","end":"2024-04-21T18:30:36.48749Z","steps":["trace[537339351] 'process raft request'  (duration: 122.389258ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:30:36.487722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.013777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-ckvnx\" ","response":"range_response_count:1 size:9995"}
	{"level":"info","ts":"2024-04-21T18:30:36.490764Z","caller":"traceutil/trace.go:171","msg":"trace[2126473930] range","detail":"{range_begin:/registry/pods/gadget/gadget-ckvnx; range_end:; response_count:1; response_revision:1298; }","duration":"121.134264ms","start":"2024-04-21T18:30:36.369616Z","end":"2024-04-21T18:30:36.49075Z","steps":["trace[2126473930] 'agreement among raft nodes before linearized reading'  (duration: 117.976578ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:30:36.488557Z","caller":"traceutil/trace.go:171","msg":"trace[1487704236] transaction","detail":"{read_only:false; response_revision:1297; number_of_response:1; }","duration":"132.513113ms","start":"2024-04-21T18:30:36.356034Z","end":"2024-04-21T18:30:36.488547Z","steps":["trace[1487704236] 'process raft request'  (duration: 130.541821ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:30:45.682473Z","caller":"traceutil/trace.go:171","msg":"trace[925164621] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"193.181584ms","start":"2024-04-21T18:30:45.489271Z","end":"2024-04-21T18:30:45.682453Z","steps":["trace[925164621] 'process raft request'  (duration: 193.030584ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:30:47.572683Z","caller":"traceutil/trace.go:171","msg":"trace[1596439499] transaction","detail":"{read_only:false; response_revision:1374; number_of_response:1; }","duration":"243.608593ms","start":"2024-04-21T18:30:47.329036Z","end":"2024-04-21T18:30:47.572644Z","steps":["trace[1596439499] 'process raft request'  (duration: 243.452193ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:30:57.108207Z","caller":"traceutil/trace.go:171","msg":"trace[1904010627] linearizableReadLoop","detail":"{readStateIndex:1462; appliedIndex:1461; }","duration":"174.536125ms","start":"2024-04-21T18:30:56.933647Z","end":"2024-04-21T18:30:57.108183Z","steps":["trace[1904010627] 'read index received'  (duration: 173.584128ms)","trace[1904010627] 'applied index is now lower than readState.Index'  (duration: 950.197µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T18:30:57.109119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.477621ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12917"}
	{"level":"info","ts":"2024-04-21T18:30:57.10918Z","caller":"traceutil/trace.go:171","msg":"trace[1804665959] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1394; }","duration":"175.59062ms","start":"2024-04-21T18:30:56.933579Z","end":"2024-04-21T18:30:57.10917Z","steps":["trace[1804665959] 'agreement among raft nodes before linearized reading'  (duration: 175.248121ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:30:57.109554Z","caller":"traceutil/trace.go:171","msg":"trace[1224880661] transaction","detail":"{read_only:false; response_revision:1394; number_of_response:1; }","duration":"177.942011ms","start":"2024-04-21T18:30:56.931602Z","end":"2024-04-21T18:30:57.109544Z","steps":["trace[1224880661] 'process raft request'  (duration: 175.76662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T18:30:57.110296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.668301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12917"}
	{"level":"info","ts":"2024-04-21T18:30:57.110829Z","caller":"traceutil/trace.go:171","msg":"trace[1644903947] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1394; }","duration":"129.261799ms","start":"2024-04-21T18:30:56.981542Z","end":"2024-04-21T18:30:57.110804Z","steps":["trace[1644903947] 'agreement among raft nodes before linearized reading'  (duration: 128.616302ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T18:31:09.74114Z","caller":"traceutil/trace.go:171","msg":"trace[526770072] transaction","detail":"{read_only:false; response_revision:1490; number_of_response:1; }","duration":"212.644951ms","start":"2024-04-21T18:31:09.528473Z","end":"2024-04-21T18:31:09.741118Z","steps":["trace[526770072] 'process raft request'  (duration: 188.810524ms)","trace[526770072] 'compare'  (duration: 22.97143ms)"],"step_count":2}
	
	
	==> gcp-auth [fb83646e62fa] <==
	2024/04/21 18:30:35 GCP Auth Webhook started!
	2024/04/21 18:30:37 Ready to marshal response ...
	2024/04/21 18:30:37 Ready to write response ...
	2024/04/21 18:30:37 Ready to marshal response ...
	2024/04/21 18:30:37 Ready to write response ...
	2024/04/21 18:30:38 Ready to marshal response ...
	2024/04/21 18:30:38 Ready to write response ...
	2024/04/21 18:30:47 Ready to marshal response ...
	2024/04/21 18:30:47 Ready to write response ...
	2024/04/21 18:31:04 Ready to marshal response ...
	2024/04/21 18:31:04 Ready to write response ...
	2024/04/21 18:31:13 Ready to marshal response ...
	2024/04/21 18:31:13 Ready to write response ...
	2024/04/21 18:31:17 Ready to marshal response ...
	2024/04/21 18:31:17 Ready to write response ...
	
	
	==> kernel <==
	 18:31:45 up 6 min,  0 users,  load average: 2.35, 2.24, 1.07
	Linux addons-519700 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [484f1f09eccf] <==
	Trace[665808633]:  ---"Txn call completed" 583ms (18:29:01.974)]
	Trace[665808633]: [587.618337ms] [587.618337ms] END
	E0421 18:29:10.961210       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.47.227:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.47.227:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.47.227:443: connect: connection refused
	W0421 18:29:10.961456       1 handler_proxy.go:93] no RequestInfo found in the context
	E0421 18:29:10.961511       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0421 18:29:10.964406       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.47.227:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.47.227:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.47.227:443: connect: connection refused
	E0421 18:29:10.967663       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.47.227:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.47.227:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.47.227:443: connect: connection refused
	I0421 18:29:11.086992       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0421 18:29:13.823780       1 trace.go:236] Trace[1264923203]: "List" accept:application/json, */*,audit-id:716681b6-5312-4f38-b2ca-b6b2d97feec7,client:172.27.192.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (21-Apr-2024 18:29:13.161) (total time: 661ms):
	Trace[1264923203]: ["List(recursive=true) etcd3" audit-id:716681b6-5312-4f38-b2ca-b6b2d97feec7,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 661ms (18:29:13.161)]
	Trace[1264923203]: [661.937295ms] [661.937295ms] END
	I0421 18:29:14.105693       1 trace.go:236] Trace[513540946]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.202.1,type:*v1.Endpoints,resource:apiServerIPInfo (21-Apr-2024 18:29:13.381) (total time: 724ms):
	Trace[513540946]: ---"initial value restored" 439ms (18:29:13.821)
	Trace[513540946]: ---"Transaction prepared" 251ms (18:29:14.072)
	Trace[513540946]: [724.034573ms] [724.034573ms] END
	I0421 18:29:24.021342       1 trace.go:236] Trace[585519503]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.202.1,type:*v1.Endpoints,resource:apiServerIPInfo (21-Apr-2024 18:29:23.382) (total time: 638ms):
	Trace[585519503]: ---"Transaction prepared" 188ms (18:29:23.572)
	Trace[585519503]: ---"Txn call completed" 448ms (18:29:24.021)
	Trace[585519503]: [638.646222ms] [638.646222ms] END
	I0421 18:30:28.193225       1 trace.go:236] Trace[669070230]: "List" accept:application/json, */*,audit-id:2d668738-e23e-44df-96e6-11789ea3cea7,client:172.27.192.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (21-Apr-2024 18:30:27.657) (total time: 535ms):
	Trace[669070230]: ["List(recursive=true) etcd3" audit-id:2d668738-e23e-44df-96e6-11789ea3cea7,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 535ms (18:30:27.657)]
	Trace[669070230]: [535.382373ms] [535.382373ms] END
	I0421 18:31:06.210900       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0421 18:31:11.974541       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [64c71080d693] <==
	I0421 18:30:02.338497       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0421 18:30:02.835088       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0421 18:30:02.877577       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0421 18:30:02.921399       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0421 18:30:02.932737       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0421 18:30:03.292525       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0421 18:30:03.321446       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0421 18:30:03.342846       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0421 18:30:03.404773       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0421 18:30:04.413585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="17.675333ms"
	I0421 18:30:04.413935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="75.4µs"
	I0421 18:30:28.926001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="193.199µs"
	I0421 18:30:32.035338       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0421 18:30:32.135155       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0421 18:30:33.088996       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0421 18:30:33.476513       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0421 18:30:36.495842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="145.900953ms"
	I0421 18:30:36.496832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="40.5µs"
	I0421 18:30:38.860615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="52.558266ms"
	I0421 18:30:38.861009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="175.299µs"
	I0421 18:30:58.706421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="7.8µs"
	I0421 18:31:22.216987       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="10.9µs"
	I0421 18:31:28.023295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="8.2µs"
	I0421 18:31:44.566472       1 stateful_set.go:458] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0421 18:31:44.816696       1 stateful_set.go:458] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	
	
	==> kube-proxy [546ec0278543] <==
	I0421 18:27:23.008016       1 server_linux.go:69] "Using iptables proxy"
	I0421 18:27:23.187460       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.202.1"]
	I0421 18:27:23.876905       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:27:23.877141       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:27:23.877179       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:27:23.948590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:27:23.949236       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:27:23.949744       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:27:23.953673       1 config.go:192] "Starting service config controller"
	I0421 18:27:24.020814       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:27:23.970877       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:27:24.021225       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:27:24.006522       1 config.go:319] "Starting node config controller"
	I0421 18:27:24.021248       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 18:27:24.125430       1 shared_informer.go:320] Caches are synced for node config
	I0421 18:27:24.125709       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 18:27:24.127418       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [e03eb59df668] <==
	W0421 18:27:01.978544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 18:27:01.978683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 18:27:02.001986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 18:27:02.002338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 18:27:02.003846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 18:27:02.004185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 18:27:02.005369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 18:27:02.005762       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 18:27:02.019022       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 18:27:02.019264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 18:27:02.034818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 18:27:02.035220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 18:27:02.158361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 18:27:02.158477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 18:27:02.302473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 18:27:02.302948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 18:27:02.305497       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 18:27:02.305545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 18:27:02.358700       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 18:27:02.358921       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 18:27:02.385261       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 18:27:02.385325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0421 18:27:02.554260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 18:27:02.554959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0421 18:27:03.941544       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 18:31:26 addons-519700 kubelet[2121]: I0421 18:31:26.897172    2121 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f636b82-7e54-4969-b5fa-7ec7fe29c7c4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5f636b82-7e54-4969-b5fa-7ec7fe29c7c4" (UID: "5f636b82-7e54-4969-b5fa-7ec7fe29c7c4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 21 18:31:26 addons-519700 kubelet[2121]: I0421 18:31:26.900860    2121 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f636b82-7e54-4969-b5fa-7ec7fe29c7c4-kube-api-access-pkx2b" (OuterVolumeSpecName: "kube-api-access-pkx2b") pod "5f636b82-7e54-4969-b5fa-7ec7fe29c7c4" (UID: "5f636b82-7e54-4969-b5fa-7ec7fe29c7c4"). InnerVolumeSpecName "kube-api-access-pkx2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 21 18:31:26 addons-519700 kubelet[2121]: I0421 18:31:26.906841    2121 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^56fafbe6-000d-11ef-9d98-1a0a4d680166" (OuterVolumeSpecName: "task-pv-storage") pod "5f636b82-7e54-4969-b5fa-7ec7fe29c7c4" (UID: "5f636b82-7e54-4969-b5fa-7ec7fe29c7c4"). InnerVolumeSpecName "pvc-f6c84e8b-4473-45f2-b416-af8d30331618". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Apr 21 18:31:26 addons-519700 kubelet[2121]: I0421 18:31:26.997820    2121 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pkx2b\" (UniqueName: \"kubernetes.io/projected/5f636b82-7e54-4969-b5fa-7ec7fe29c7c4-kube-api-access-pkx2b\") on node \"addons-519700\" DevicePath \"\""
	Apr 21 18:31:26 addons-519700 kubelet[2121]: I0421 18:31:26.997921    2121 reconciler_common.go:282] "operationExecutor.UnmountDevice started for volume \"pvc-f6c84e8b-4473-45f2-b416-af8d30331618\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^56fafbe6-000d-11ef-9d98-1a0a4d680166\") on node \"addons-519700\" "
	Apr 21 18:31:26 addons-519700 kubelet[2121]: I0421 18:31:26.997943    2121 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5f636b82-7e54-4969-b5fa-7ec7fe29c7c4-gcp-creds\") on node \"addons-519700\" DevicePath \"\""
	Apr 21 18:31:27 addons-519700 kubelet[2121]: I0421 18:31:27.007938    2121 operation_generator.go:1001] UnmountDevice succeeded for volume "pvc-f6c84e8b-4473-45f2-b416-af8d30331618" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^56fafbe6-000d-11ef-9d98-1a0a4d680166") on node "addons-519700"
	Apr 21 18:31:27 addons-519700 kubelet[2121]: I0421 18:31:27.098983    2121 reconciler_common.go:289] "Volume detached for volume \"pvc-f6c84e8b-4473-45f2-b416-af8d30331618\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^56fafbe6-000d-11ef-9d98-1a0a4d680166\") on node \"addons-519700\" DevicePath \"\""
	Apr 21 18:31:27 addons-519700 kubelet[2121]: I0421 18:31:27.141814    2121 scope.go:117] "RemoveContainer" containerID="7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5"
	Apr 21 18:31:27 addons-519700 kubelet[2121]: I0421 18:31:27.193754    2121 scope.go:117] "RemoveContainer" containerID="7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5"
	Apr 21 18:31:27 addons-519700 kubelet[2121]: E0421 18:31:27.195774    2121 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5" containerID="7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5"
	Apr 21 18:31:27 addons-519700 kubelet[2121]: I0421 18:31:27.195810    2121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5"} err="failed to get container status \"7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7c3efd900253f930310f904a710380e7cec2fc3ab2b5e5794292e878063514a5"
	Apr 21 18:31:28 addons-519700 kubelet[2121]: I0421 18:31:28.717250    2121 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdpsr\" (UniqueName: \"kubernetes.io/projected/33c57d3b-8b9e-4319-95d2-4d55044ded10-kube-api-access-tdpsr\") pod \"33c57d3b-8b9e-4319-95d2-4d55044ded10\" (UID: \"33c57d3b-8b9e-4319-95d2-4d55044ded10\") "
	Apr 21 18:31:28 addons-519700 kubelet[2121]: I0421 18:31:28.730696    2121 scope.go:117] "RemoveContainer" containerID="7d5d877233f41964a3807344debc243440b9d7f7aa419480f679603dcfd532fc"
	Apr 21 18:31:28 addons-519700 kubelet[2121]: E0421 18:31:28.731319    2121 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-ckvnx_gadget(252e89d7-799a-4ef2-9c6f-b82714e41de5)\"" pod="gadget/gadget-ckvnx" podUID="252e89d7-799a-4ef2-9c6f-b82714e41de5"
	Apr 21 18:31:28 addons-519700 kubelet[2121]: I0421 18:31:28.731593    2121 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c57d3b-8b9e-4319-95d2-4d55044ded10-kube-api-access-tdpsr" (OuterVolumeSpecName: "kube-api-access-tdpsr") pod "33c57d3b-8b9e-4319-95d2-4d55044ded10" (UID: "33c57d3b-8b9e-4319-95d2-4d55044ded10"). InnerVolumeSpecName "kube-api-access-tdpsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 21 18:31:28 addons-519700 kubelet[2121]: I0421 18:31:28.758324    2121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f636b82-7e54-4969-b5fa-7ec7fe29c7c4" path="/var/lib/kubelet/pods/5f636b82-7e54-4969-b5fa-7ec7fe29c7c4/volumes"
	Apr 21 18:31:28 addons-519700 kubelet[2121]: I0421 18:31:28.819638    2121 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tdpsr\" (UniqueName: \"kubernetes.io/projected/33c57d3b-8b9e-4319-95d2-4d55044ded10-kube-api-access-tdpsr\") on node \"addons-519700\" DevicePath \"\""
	Apr 21 18:31:29 addons-519700 kubelet[2121]: I0421 18:31:29.327905    2121 scope.go:117] "RemoveContainer" containerID="30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662"
	Apr 21 18:31:29 addons-519700 kubelet[2121]: I0421 18:31:29.390731    2121 scope.go:117] "RemoveContainer" containerID="30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662"
	Apr 21 18:31:29 addons-519700 kubelet[2121]: E0421 18:31:29.392594    2121 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662" containerID="30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662"
	Apr 21 18:31:29 addons-519700 kubelet[2121]: I0421 18:31:29.392698    2121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662"} err="failed to get container status \"30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662\": rpc error: code = Unknown desc = Error response from daemon: No such container: 30a14c20a3a59bca3e9250a3ee0a5cd8dd54f863985d3fc8048c85c63472c662"
	Apr 21 18:31:30 addons-519700 kubelet[2121]: I0421 18:31:30.751998    2121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c57d3b-8b9e-4319-95d2-4d55044ded10" path="/var/lib/kubelet/pods/33c57d3b-8b9e-4319-95d2-4d55044ded10/volumes"
	Apr 21 18:31:43 addons-519700 kubelet[2121]: I0421 18:31:43.729747    2121 scope.go:117] "RemoveContainer" containerID="7d5d877233f41964a3807344debc243440b9d7f7aa419480f679603dcfd532fc"
	Apr 21 18:31:43 addons-519700 kubelet[2121]: E0421 18:31:43.730845    2121 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-ckvnx_gadget(252e89d7-799a-4ef2-9c6f-b82714e41de5)\"" pod="gadget/gadget-ckvnx" podUID="252e89d7-799a-4ef2-9c6f-b82714e41de5"
	
	
	==> storage-provisioner [e3f59132bbdd] <==
	I0421 18:27:50.791493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 18:27:50.977362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 18:27:50.977492       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 18:27:51.179116       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 18:27:51.179391       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-519700_03de1a1a-55df-4db8-aafe-5425ef7d3781!
	I0421 18:27:51.180945       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf296e85-3af6-4250-bcfa-fdec64cccce1", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-519700_03de1a1a-55df-4db8-aafe-5425ef7d3781 became leader
	I0421 18:27:51.279956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-519700_03de1a1a-55df-4db8-aafe-5425ef7d3781!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:31:35.625113   12364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-519700 -n addons-519700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-519700 -n addons-519700: (13.2221147s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-519700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-xhltn ingress-nginx-admission-patch-lm67j
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-519700 describe pod ingress-nginx-admission-create-xhltn ingress-nginx-admission-patch-lm67j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-519700 describe pod ingress-nginx-admission-create-xhltn ingress-nginx-admission-patch-lm67j: exit status 1 (187.2631ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xhltn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lm67j" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-519700 describe pod ingress-nginx-admission-create-xhltn ingress-nginx-admission-patch-lm67j: exit status 1
--- FAIL: TestAddons/parallel/Registry (84.36s)
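The stderr captured above repeatedly includes the warning "Unable to resolve the current Docker CLI context \"default\": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\...\meta.json: The system cannot find the path specified". A minimal sketch of how the Docker CLI context state could be inspected and reset on the affected Windows host, assuming a standard Docker CLI is on PATH (these commands are not part of the test run, and whether re-selecting the default context actually clears the warning on this host is an assumption, not something the report confirms):

	REM List the known contexts and the one currently selected (standard Docker CLI).
	docker context ls
	REM Print only the name of the context the CLI resolves to.
	docker context show
	REM Inspect the CLI config file that records currentContext.
	type %USERPROFILE%\.docker\config.json
	REM Re-select the built-in default context; assumed to replace a stale currentContext entry.
	docker context use default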

                                                
                                    
TestCertExpiration (1126.37s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-104900 --memory=2048 --cert-expiration=3m --driver=hyperv
E0421 20:57:54.269213   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-104900 --memory=2048 --cert-expiration=3m --driver=hyperv: (8m41.8584694s)
E0421 21:07:54.271662   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-104900 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-104900 --memory=2048 --cert-expiration=8760h --driver=hyperv: exit status 90 (3m44.1783045s)

                                                
                                                
-- stdout --
	* [cert-expiration-104900] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "cert-expiration-104900" primary control-plane node in "cert-expiration-104900" cluster
	* Updating the running hyperv "cert-expiration-104900" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:08:59.423657     804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 21 21:04:35 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:04:35 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:35.995067321Z" level=info msg="Starting up"
	Apr 21 21:04:35 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:35.997095700Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:36.002170348Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.038577088Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073401644Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073493643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073571043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073690841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073809040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073946339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074176637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074280935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074351035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074367035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074466634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074834230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079442485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079584783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079753381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079859980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080052778Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080206577Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080354676Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.105453528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.105909923Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106080921Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106213420Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106239320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106426518Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107033112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107277310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107412308Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107434508Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107452308Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107468708Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107484408Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107501407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107519107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107542107Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107579307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107595507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107618106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107634506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107648606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107663106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107679306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107697505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107724205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107739305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107754505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107774405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107788805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107904803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108031202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108056302Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108080702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108094902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108113801Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108186701Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108227400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108243100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108254900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108431598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108514297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108537097Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108811094Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109069492Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109207591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109236590Z" level=info msg="containerd successfully booted in 0.072042s"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.097237691Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.133628454Z" level=info msg="Loading containers: start."
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.449384281Z" level=info msg="Loading containers: done."
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.476864858Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.482641311Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.610585969Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:04:37 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.613225747Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:10 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.095739570Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097376572Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097537272Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097680772Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097715572Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.179788181Z" level=info msg="Starting up"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.181143682Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.187188689Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.218459824Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263633374Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263773874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263942175Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263984775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264034475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264052175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264263475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264381375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264407475Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264420975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264462275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264621675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268378380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268430380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268593880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268702380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268737780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268759880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268773380Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269055780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269181181Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269216781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269235581Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269252681Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269320981Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269593481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269701581Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269722981Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269739681Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269755781Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269777581Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269911581Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269949981Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269970581Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270095882Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270127682Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270143982Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270173782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270198882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270216082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270234882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270250682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270266082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270279782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270299482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270322182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270341182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270356982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270371582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270386482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270406382Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270432182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270448382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270479882Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270563182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270602582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270619182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270632282Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270701082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270721482Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270787282Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271399083Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271634583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271719883Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271763783Z" level=info msg="containerd successfully booted in 0.054391s"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.239054364Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.260426288Z" level=info msg="Loading containers: start."
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.472425724Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.569331433Z" level=info msg="Loading containers: done."
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.598001665Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.598148065Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.653884527Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:05:12 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.655959229Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:25 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.619424508Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.623929613Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624073513Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624129213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624159513Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.706185522Z" level=info msg="Starting up"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.707570823Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.709504626Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.748674569Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780570405Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780647905Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780709405Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780873205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780924005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780965605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781172506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781283806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781308106Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781322006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781353706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781580306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785359810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785407510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785550610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785590711Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785655211Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785682011Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785694911Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786039511Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786166611Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786232411Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786256711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786280811Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786350011Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787328812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787528713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787620813Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787687813Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787750513Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789533015Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789594815Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789616415Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789639315Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789657215Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789672715Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789686815Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789710315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789726715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789743815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789856415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789884415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789900715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789915015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789938115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789956015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789974615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789988915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790003215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790018315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790036816Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790079216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790129116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790154116Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790198916Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790242316Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790259416Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790278316Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790366416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790408816Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790423916Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790840916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790962717Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.791043317Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.791065817Z" level=info msg="containerd successfully booted in 0.043136s"
	Apr 21 21:05:27 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:27.760560100Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.609859948Z" level=info msg="Loading containers: start."
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.816500979Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.906680780Z" level=info msg="Loading containers: done."
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.930907807Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.931054107Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.976585958Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.976666158Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:05:28 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281763599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281889999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281906499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.282004499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.299976430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.300328031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.300419731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.302504135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393307490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393786091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393893091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.394254592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.409093218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.410697420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.413891626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.414390727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947040941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947175241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947209542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.948217843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077141962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077323562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077348762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077540762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.110319517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.111301519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.111889920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.112281121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.126922645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.127331946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.127639147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.128539748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.057659703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.059092605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.059590805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.066668914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178063354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178380854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178420754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178606854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610555295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610690696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610707996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610872496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.843436987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.844559189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.855781803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.856658804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.864785414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.865602115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.871949623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.872123423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549111764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549374665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549474765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.550353867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:06:33.247244254Z" level=info msg="ignoring event" container=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.248202360Z" level=info msg="shim disconnected" id=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 namespace=moby
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.248900365Z" level=warning msg="cleaning up after shim disconnected" id=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 namespace=moby
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.250534776Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204432664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204751167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204916268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.206642079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:31 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:11:31 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:31.933256937Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.186342096Z" level=info msg="shim disconnected" id=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.187600286Z" level=warning msg="cleaning up after shim disconnected" id=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.187847284Z" level=info msg="ignoring event" container=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.188058082Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.255949235Z" level=info msg="ignoring event" container=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257404823Z" level=info msg="shim disconnected" id=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257736121Z" level=warning msg="cleaning up after shim disconnected" id=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257911919Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.264762864Z" level=info msg="shim disconnected" id=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.264943062Z" level=warning msg="cleaning up after shim disconnected" id=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.265078361Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.266193852Z" level=info msg="ignoring event" container=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.268712732Z" level=info msg="ignoring event" container=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269227928Z" level=info msg="shim disconnected" id=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269364127Z" level=warning msg="cleaning up after shim disconnected" id=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269588325Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274409586Z" level=info msg="shim disconnected" id=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.274434286Z" level=info msg="ignoring event" container=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274478286Z" level=warning msg="cleaning up after shim disconnected" id=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274496785Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.281792927Z" level=info msg="ignoring event" container=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.282136724Z" level=info msg="shim disconnected" id=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.283055216Z" level=info msg="ignoring event" container=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.302886057Z" level=warning msg="cleaning up after shim disconnected" id=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.304848741Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316457047Z" level=info msg="shim disconnected" id=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316582446Z" level=warning msg="cleaning up after shim disconnected" id=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316680445Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.284291306Z" level=info msg="shim disconnected" id=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.321162409Z" level=info msg="ignoring event" container=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.321798204Z" level=info msg="ignoring event" container=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.324036986Z" level=warning msg="cleaning up after shim disconnected" id=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.324217384Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.332933614Z" level=info msg="ignoring event" container=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.333133213Z" level=info msg="ignoring event" container=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.333476910Z" level=info msg="ignoring event" container=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.322438799Z" level=info msg="shim disconnected" id=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343857926Z" level=warning msg="cleaning up after shim disconnected" id=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343896826Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345006117Z" level=info msg="shim disconnected" id=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345284215Z" level=warning msg="cleaning up after shim disconnected" id=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345610212Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343772027Z" level=info msg="shim disconnected" id=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.363712166Z" level=warning msg="cleaning up after shim disconnected" id=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.363864065Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.365833449Z" level=info msg="shim disconnected" id=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.366107647Z" level=warning msg="cleaning up after shim disconnected" id=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.366225546Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:37.095380412Z" level=info msg="ignoring event" container=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095781808Z" level=info msg="shim disconnected" id=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095841908Z" level=warning msg="cleaning up after shim disconnected" id=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095853408Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.129671635Z" level=warning msg="cleanup warnings time=\"2024-04-21T21:11:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.067623447Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.116791123Z" level=info msg="shim disconnected" id=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.117447089Z" level=warning msg="cleaning up after shim disconnected" id=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.117864257Z" level=info msg="ignoring event" container=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.118081745Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.195261646Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196084979Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196537662Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196602388Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: docker.service: Consumed 10.039s CPU time.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:11:43 cert-expiration-104900 dockerd[4632]: time="2024-04-21T21:11:43.282728411Z" level=info msg="Starting up"
	Apr 21 21:12:43 cert-expiration-104900 dockerd[4632]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-windows-amd64.exe start -p cert-expiration-104900 --memory=2048 --cert-expiration=8760h --driver=hyperv" : exit status 90
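	The journalctl dump above ends with the restarted dockerd timing out while dialing /run/containerd/containerd.sock, which is what surfaces here as exit status 90. A minimal diagnostic sketch for this kind of failure, assuming the cert-expiration-104900 VM is still reachable; the unit-inspection commands are the ones the report itself suggests, and invoking them from the host through `minikube ssh --` is an assumption about workflow, not something this run did:
	
	  # re-check the docker unit on the guest, using the same binary and profile as this run
	  out/minikube-windows-amd64.exe -p cert-expiration-104900 ssh -- "sudo systemctl status docker.service --no-pager"
	  out/minikube-windows-amd64.exe -p cert-expiration-104900 ssh -- "sudo journalctl --no-pager -u docker"
	  # collect full logs for a GitHub issue, as the advice box above recommends
	  out/minikube-windows-amd64.exe -p cert-expiration-104900 logs --file=logs.txt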
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-104900] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "cert-expiration-104900" primary control-plane node in "cert-expiration-104900" cluster
	* Updating the running hyperv "cert-expiration-104900" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:08:59.423657     804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 21 21:04:35 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:04:35 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:35.995067321Z" level=info msg="Starting up"
	Apr 21 21:04:35 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:35.997095700Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:36.002170348Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.038577088Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073401644Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073493643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073571043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073690841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073809040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073946339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074176637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074280935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074351035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074367035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074466634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074834230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079442485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079584783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079753381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079859980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080052778Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080206577Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080354676Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.105453528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.105909923Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106080921Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106213420Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106239320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106426518Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107033112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107277310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107412308Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107434508Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107452308Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107468708Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107484408Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107501407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107519107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107542107Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107579307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107595507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107618106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107634506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107648606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107663106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107679306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107697505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107724205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107739305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107754505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107774405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107788805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107904803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108031202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108056302Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108080702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108094902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108113801Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108186701Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108227400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108243100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108254900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108431598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108514297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108537097Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108811094Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109069492Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109207591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109236590Z" level=info msg="containerd successfully booted in 0.072042s"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.097237691Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.133628454Z" level=info msg="Loading containers: start."
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.449384281Z" level=info msg="Loading containers: done."
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.476864858Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.482641311Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.610585969Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:04:37 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.613225747Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:10 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.095739570Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097376572Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097537272Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097680772Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097715572Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.179788181Z" level=info msg="Starting up"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.181143682Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.187188689Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.218459824Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263633374Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263773874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263942175Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263984775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264034475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264052175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264263475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264381375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264407475Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264420975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264462275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264621675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268378380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268430380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268593880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268702380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268737780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268759880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268773380Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269055780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269181181Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269216781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269235581Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269252681Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269320981Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269593481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269701581Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269722981Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269739681Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269755781Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269777581Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269911581Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269949981Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269970581Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270095882Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270127682Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270143982Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270173782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270198882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270216082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270234882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270250682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270266082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270279782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270299482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270322182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270341182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270356982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270371582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270386482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270406382Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270432182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270448382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270479882Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270563182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270602582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270619182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270632282Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270701082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270721482Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270787282Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271399083Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271634583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271719883Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271763783Z" level=info msg="containerd successfully booted in 0.054391s"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.239054364Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.260426288Z" level=info msg="Loading containers: start."
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.472425724Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.569331433Z" level=info msg="Loading containers: done."
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.598001665Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.598148065Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.653884527Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:05:12 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.655959229Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:25 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.619424508Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.623929613Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624073513Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624129213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624159513Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.706185522Z" level=info msg="Starting up"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.707570823Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.709504626Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.748674569Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780570405Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780647905Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780709405Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780873205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780924005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780965605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781172506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781283806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781308106Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781322006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781353706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781580306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785359810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785407510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785550610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785590711Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785655211Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785682011Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785694911Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786039511Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786166611Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786232411Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786256711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786280811Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786350011Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787328812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787528713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787620813Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787687813Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787750513Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789533015Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789594815Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789616415Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789639315Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789657215Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789672715Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789686815Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789710315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789726715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789743815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789856415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789884415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789900715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789915015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789938115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789956015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789974615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789988915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790003215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790018315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790036816Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790079216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790129116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790154116Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790198916Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790242316Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790259416Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790278316Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790366416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790408816Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790423916Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790840916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790962717Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.791043317Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.791065817Z" level=info msg="containerd successfully booted in 0.043136s"
	Apr 21 21:05:27 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:27.760560100Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.609859948Z" level=info msg="Loading containers: start."
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.816500979Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.906680780Z" level=info msg="Loading containers: done."
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.930907807Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.931054107Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.976585958Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.976666158Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:05:28 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281763599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281889999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281906499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.282004499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.299976430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.300328031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.300419731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.302504135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393307490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393786091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393893091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.394254592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.409093218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.410697420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.413891626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.414390727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947040941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947175241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947209542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.948217843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077141962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077323562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077348762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077540762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.110319517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.111301519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.111889920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.112281121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.126922645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.127331946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.127639147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.128539748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.057659703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.059092605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.059590805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.066668914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178063354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178380854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178420754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178606854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610555295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610690696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610707996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610872496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.843436987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.844559189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.855781803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.856658804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.864785414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.865602115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.871949623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.872123423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549111764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549374665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549474765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.550353867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:06:33.247244254Z" level=info msg="ignoring event" container=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.248202360Z" level=info msg="shim disconnected" id=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 namespace=moby
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.248900365Z" level=warning msg="cleaning up after shim disconnected" id=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 namespace=moby
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.250534776Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204432664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204751167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204916268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.206642079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:31 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:11:31 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:31.933256937Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.186342096Z" level=info msg="shim disconnected" id=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.187600286Z" level=warning msg="cleaning up after shim disconnected" id=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.187847284Z" level=info msg="ignoring event" container=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.188058082Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.255949235Z" level=info msg="ignoring event" container=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257404823Z" level=info msg="shim disconnected" id=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257736121Z" level=warning msg="cleaning up after shim disconnected" id=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257911919Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.264762864Z" level=info msg="shim disconnected" id=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.264943062Z" level=warning msg="cleaning up after shim disconnected" id=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.265078361Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.266193852Z" level=info msg="ignoring event" container=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.268712732Z" level=info msg="ignoring event" container=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269227928Z" level=info msg="shim disconnected" id=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269364127Z" level=warning msg="cleaning up after shim disconnected" id=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269588325Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274409586Z" level=info msg="shim disconnected" id=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.274434286Z" level=info msg="ignoring event" container=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274478286Z" level=warning msg="cleaning up after shim disconnected" id=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274496785Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.281792927Z" level=info msg="ignoring event" container=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.282136724Z" level=info msg="shim disconnected" id=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.283055216Z" level=info msg="ignoring event" container=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.302886057Z" level=warning msg="cleaning up after shim disconnected" id=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.304848741Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316457047Z" level=info msg="shim disconnected" id=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316582446Z" level=warning msg="cleaning up after shim disconnected" id=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316680445Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.284291306Z" level=info msg="shim disconnected" id=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.321162409Z" level=info msg="ignoring event" container=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.321798204Z" level=info msg="ignoring event" container=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.324036986Z" level=warning msg="cleaning up after shim disconnected" id=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.324217384Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.332933614Z" level=info msg="ignoring event" container=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.333133213Z" level=info msg="ignoring event" container=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.333476910Z" level=info msg="ignoring event" container=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.322438799Z" level=info msg="shim disconnected" id=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343857926Z" level=warning msg="cleaning up after shim disconnected" id=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343896826Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345006117Z" level=info msg="shim disconnected" id=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345284215Z" level=warning msg="cleaning up after shim disconnected" id=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345610212Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343772027Z" level=info msg="shim disconnected" id=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.363712166Z" level=warning msg="cleaning up after shim disconnected" id=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.363864065Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.365833449Z" level=info msg="shim disconnected" id=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.366107647Z" level=warning msg="cleaning up after shim disconnected" id=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.366225546Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:37.095380412Z" level=info msg="ignoring event" container=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095781808Z" level=info msg="shim disconnected" id=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095841908Z" level=warning msg="cleaning up after shim disconnected" id=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095853408Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.129671635Z" level=warning msg="cleanup warnings time=\"2024-04-21T21:11:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.067623447Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.116791123Z" level=info msg="shim disconnected" id=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.117447089Z" level=warning msg="cleaning up after shim disconnected" id=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.117864257Z" level=info msg="ignoring event" container=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.118081745Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.195261646Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196084979Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196537662Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196602388Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: docker.service: Consumed 10.039s CPU time.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:11:43 cert-expiration-104900 dockerd[4632]: time="2024-04-21T21:11:43.282728411Z" level=info msg="Starting up"
	Apr 21 21:12:43 cert-expiration-104900 dockerd[4632]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-04-21 21:12:43.6730351 +0000 UTC m=+10176.031168301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-104900 -n cert-expiration-104900
E0421 21:12:54.283243   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-104900 -n cert-expiration-104900: exit status 2 (12.727551s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:12:43.812649    5820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-expiration-104900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p cert-expiration-104900 logs -n 25: (1m47.8942435s)
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args               |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-190300 sudo            | cilium-190300             | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:55 UTC |                     |
	|         | containerd config dump           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-190300 sudo            | cilium-190300             | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:55 UTC |                     |
	|         | systemctl status crio --all      |                           |                   |         |                     |                     |
	|         | --full --no-pager                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-190300 sudo            | cilium-190300             | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:55 UTC |                     |
	|         | systemctl cat crio --no-pager    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-190300 sudo find       | cilium-190300             | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:55 UTC |                     |
	|         | /etc/crio -type f -exec sh -c    |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;             |                           |                   |         |                     |                     |
	| ssh     | -p cilium-190300 sudo crio       | cilium-190300             | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:55 UTC |                     |
	|         | config                           |                           |                   |         |                     |                     |
	| delete  | -p cilium-190300                 | cilium-190300             | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:55 UTC | 21 Apr 24 20:55 UTC |
	| start   | -p kubernetes-upgrade-208700     | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:55 UTC | 21 Apr 24 21:03 UTC |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-149100        | force-systemd-flag-149100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:56 UTC | 21 Apr 24 20:56 UTC |
	|         | ssh docker info --format         |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-149100     | force-systemd-flag-149100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:56 UTC | 21 Apr 24 20:57 UTC |
	| start   | -p cert-expiration-104900        | cert-expiration-104900    | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:57 UTC | 21 Apr 24 21:05 UTC |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m             |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-043400        | running-upgrade-043400    | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:58 UTC | 21 Apr 24 21:07 UTC |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-214100         | force-systemd-env-214100  | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:01 UTC | 21 Apr 24 21:01 UTC |
	|         | ssh docker info --format         |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-214100      | force-systemd-env-214100  | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:01 UTC | 21 Apr 24 21:02 UTC |
	| start   | -p docker-flags-064200           | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:02 UTC | 21 Apr 24 21:09 UTC |
	|         | --cache-images=false             |                           |                   |         |                     |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --install-addons=false           |                           |                   |         |                     |                     |
	|         | --wait=false                     |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR             |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT             |                           |                   |         |                     |                     |
	|         | --docker-opt=debug               |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true            |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-208700     | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:03 UTC | 21 Apr 24 21:04 UTC |
	| start   | -p kubernetes-upgrade-208700     | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:04 UTC | 21 Apr 24 21:11 UTC |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-043400        | running-upgrade-043400    | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:07 UTC | 21 Apr 24 21:09 UTC |
	| start   | -p cert-expiration-104900        | cert-expiration-104900    | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:08 UTC |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h          |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p cert-options-338400           | cert-options-338400       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1        |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15    |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost      |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | docker-flags-064200 ssh          | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC | 21 Apr 24 21:09 UTC |
	|         | sudo systemctl show docker       |                           |                   |         |                     |                     |
	|         | --property=Environment           |                           |                   |         |                     |                     |
	|         | --no-pager                       |                           |                   |         |                     |                     |
	| ssh     | docker-flags-064200 ssh          | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC | 21 Apr 24 21:09 UTC |
	|         | sudo systemctl show docker       |                           |                   |         |                     |                     |
	|         | --property=ExecStart             |                           |                   |         |                     |                     |
	|         | --no-pager                       |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-064200           | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC | 21 Apr 24 21:10 UTC |
	| start   | -p stopped-upgrade-603200        | minikube                  | minikube6\jenkins | v1.26.0 | 21 Apr 24 21:10 GMT |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv               |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-208700     | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:11 UTC |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0     |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-208700     | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:11 UTC |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 21:11:13
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 21:11:13.617736    7172 out.go:291] Setting OutFile to fd 1804 ...
	I0421 21:11:13.618744    7172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 21:11:13.618744    7172 out.go:304] Setting ErrFile to fd 1912...
	I0421 21:11:13.618744    7172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 21:11:13.645112    7172 out.go:298] Setting JSON to false
	I0421 21:11:13.649428    7172 start.go:129] hostinfo: {"hostname":"minikube6","uptime":19748,"bootTime":1713714124,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 21:11:13.649428    7172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 21:11:13.652416    7172 out.go:177] * [kubernetes-upgrade-208700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 21:11:13.656486    7172 notify.go:220] Checking for updates...
	I0421 21:11:13.658501    7172 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 21:11:13.660903    7172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 21:11:13.663933    7172 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 21:11:13.666967    7172 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 21:11:13.669735    7172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 21:11:10.603368     804 main.go:141] libmachine: [stdout =====>] : 172.27.199.208
	
	I0421 21:11:10.603368     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:10.611051     804 main.go:141] libmachine: Using SSH client type: native
	I0421 21:11:10.611720     804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.208 22 <nil> <nil>}
	I0421 21:11:10.611720     804 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 21:11:10.751920     804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713733870.744086767
	
	I0421 21:11:10.751920     804 fix.go:216] guest clock: 1713733870.744086767
	I0421 21:11:10.751920     804 fix.go:229] Guest: 2024-04-21 21:11:10.744086767 +0000 UTC Remote: 2024-04-21 21:11:05.4664656 +0000 UTC m=+126.164251801 (delta=5.277621167s)
	I0421 21:11:10.751920     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-104900 ).state
	I0421 21:11:12.970362     804 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:11:12.970362     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:12.970362     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-104900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:11:15.943482    1388 start.go:364] duration metric: took 1m57.6484218s to acquireMachinesLock for "cert-options-338400"
	I0421 21:11:15.943482    1388 start.go:93] Provisioning new machine with config: &{Name:cert-options-338400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:cert-options-338400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 21:11:15.944203    1388 start.go:125] createHost starting for "" (driver="hyperv")
	I0421 21:11:15.951141    1388 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0421 21:11:15.951942    1388 start.go:159] libmachine.API.Create for "cert-options-338400" (driver="hyperv")
	I0421 21:11:15.951942    1388 client.go:168] LocalClient.Create starting
	I0421 21:11:15.952416    1388 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 21:11:15.952416    1388 main.go:141] libmachine: Decoding PEM data...
	I0421 21:11:15.952416    1388 main.go:141] libmachine: Parsing certificate...
	I0421 21:11:15.952416    1388 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 21:11:15.952416    1388 main.go:141] libmachine: Decoding PEM data...
	I0421 21:11:15.952416    1388 main.go:141] libmachine: Parsing certificate...
	I0421 21:11:15.953602    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 21:11:13.672774    7172 config.go:182] Loaded profile config "kubernetes-upgrade-208700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 21:11:13.674122    7172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 21:11:15.785122     804 main.go:141] libmachine: [stdout =====>] : 172.27.199.208
	
	I0421 21:11:15.785122     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:15.791675     804 main.go:141] libmachine: Using SSH client type: native
	I0421 21:11:15.791675     804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.208 22 <nil> <nil>}
	I0421 21:11:15.791675     804 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713733870
	I0421 21:11:15.943235     804 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 21:11:10 UTC 2024
	
	I0421 21:11:15.943235     804 fix.go:236] clock set: Sun Apr 21 21:11:10 UTC 2024
	 (err=<nil>)
	I0421 21:11:15.943235     804 start.go:83] releasing machines lock for "cert-expiration-104900", held for 1m5.0949543s
	I0421 21:11:15.943482     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-104900 ).state
	I0421 21:11:18.855020     804 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:11:18.855020     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:18.855020     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-104900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:11:19.949173    7172 out.go:177] * Using the hyperv driver based on existing profile
	I0421 21:11:19.955682    7172 start.go:297] selected driver: hyperv
	I0421 21:11:19.955682    7172 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-208700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-208700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.193.155 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 21:11:19.955682    7172 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 21:11:20.012702    7172 cni.go:84] Creating CNI manager for ""
	I0421 21:11:20.012784    7172 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 21:11:20.012986    7172 start.go:340] cluster config:
	{Name:kubernetes-upgrade-208700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-208700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.193.155 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 21:11:20.013333    7172 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 21:11:20.019364    7172 out.go:177] * Starting "kubernetes-upgrade-208700" primary control-plane node in "kubernetes-upgrade-208700" cluster
	I0421 21:11:18.238436    1388 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 21:11:18.238670    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:18.238768    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 21:11:20.199266    1388 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 21:11:20.199266    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:20.199266    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 21:11:21.832044    1388 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 21:11:21.832204    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:21.832303    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 21:11:20.041456    7172 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 21:11:20.042576    7172 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 21:11:20.042695    7172 cache.go:56] Caching tarball of preloaded images
	I0421 21:11:20.043051    7172 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 21:11:20.043051    7172 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 21:11:20.043051    7172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-208700\config.json ...
	I0421 21:11:20.046570    7172 start.go:360] acquireMachinesLock for kubernetes-upgrade-208700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 21:11:21.592045     804 main.go:141] libmachine: [stdout =====>] : 172.27.199.208
	
	I0421 21:11:21.592045     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:21.596896     804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 21:11:21.597093     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-104900 ).state
	I0421 21:11:21.608222     804 ssh_runner.go:195] Run: cat /version.json
	I0421 21:11:21.608222     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-104900 ).state
	I0421 21:11:24.537532     804 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:11:24.537532     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:24.537622     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-104900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:11:24.577651     804 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:11:24.577651     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:24.577651     804 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-104900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:11:25.927014    1388 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 21:11:25.927014    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:25.929337    1388 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 21:11:26.413481    1388 main.go:141] libmachine: Creating SSH key...
	I0421 21:11:26.529081    1388 main.go:141] libmachine: Creating VM...
	I0421 21:11:26.529081    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 21:11:27.368667     804 main.go:141] libmachine: [stdout =====>] : 172.27.199.208
	
	I0421 21:11:27.368667     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:27.369527     804 sshutil.go:53] new ssh client: &{IP:172.27.199.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-104900\id_rsa Username:docker}
	I0421 21:11:27.394488     804 main.go:141] libmachine: [stdout =====>] : 172.27.199.208
	
	I0421 21:11:27.394488     804 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:27.395249     804 sshutil.go:53] new ssh client: &{IP:172.27.199.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-104900\id_rsa Username:docker}
	I0421 21:11:27.479226     804 ssh_runner.go:235] Completed: cat /version.json: (5.870961s)
	I0421 21:11:27.495566     804 ssh_runner.go:195] Run: systemctl --version
	I0421 21:11:29.509782     804 ssh_runner.go:235] Completed: systemctl --version: (2.0142022s)
	I0421 21:11:29.509782     804 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.9128293s)
	W0421 21:11:29.509782     804 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0421 21:11:29.509782     804 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0421 21:11:29.509782     804 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0421 21:11:29.530068     804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 21:11:29.540335     804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 21:11:29.554283     804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 21:11:29.575863     804 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0421 21:11:29.575954     804 start.go:494] detecting cgroup driver to use...
	I0421 21:11:29.576131     804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 21:11:29.713548    1388 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 21:11:29.713548    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:29.714251    1388 main.go:141] libmachine: Using switch "Default Switch"
	I0421 21:11:29.714292    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 21:11:31.580957    1388 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 21:11:31.580957    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:31.580957    1388 main.go:141] libmachine: Creating VHD
	I0421 21:11:31.581662    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 21:11:29.648652     804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 21:11:29.687610     804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 21:11:29.709574     804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 21:11:29.724583     804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 21:11:29.762223     804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 21:11:29.810953     804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 21:11:29.850192     804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 21:11:29.890616     804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 21:11:29.924851     804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 21:11:29.960009     804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 21:11:29.997801     804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 21:11:30.034079     804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 21:11:30.073033     804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 21:11:30.110742     804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 21:11:30.418714     804 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 21:11:30.459455     804 start.go:494] detecting cgroup driver to use...
	I0421 21:11:30.477207     804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 21:11:30.533117     804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 21:11:30.577174     804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 21:11:30.643943     804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 21:11:30.689403     804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 21:11:30.720801     804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 21:11:30.774392     804 ssh_runner.go:195] Run: which cri-dockerd
	I0421 21:11:30.795480     804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 21:11:30.815404     804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 21:11:30.869741     804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 21:11:31.199656     804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 21:11:31.499188     804 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 21:11:31.499401     804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 21:11:31.550513     804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 21:11:31.901660     804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 21:11:35.412843    1388 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C30DA933-0F8F-404D-8DCE-1052CF2B4E29
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 21:11:35.412843    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:35.412843    1388 main.go:141] libmachine: Writing magic tar header
	I0421 21:11:35.412843    1388 main.go:141] libmachine: Writing SSH key tar header
	I0421 21:11:35.423050    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 21:11:38.668867    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:11:38.668867    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:38.668867    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400\disk.vhd' -SizeBytes 20000MB
	I0421 21:11:41.274104    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:11:41.274104    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:41.274104    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM cert-options-338400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0421 21:11:45.050594    1388 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	cert-options-338400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 21:11:45.050594    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:45.050594    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName cert-options-338400 -DynamicMemoryEnabled $false
	I0421 21:11:47.411923    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:11:47.411923    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:47.412457    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor cert-options-338400 -Count 2
	I0421 21:11:49.631991    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:11:49.631991    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:49.632160    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName cert-options-338400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400\boot2docker.iso'
	I0421 21:11:52.215111    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:11:52.215111    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:52.215598    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName cert-options-338400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-338400\disk.vhd'
	I0421 21:11:55.009419    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:11:55.009419    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:55.009419    1388 main.go:141] libmachine: Starting VM...
	I0421 21:11:55.009419    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM cert-options-338400
	I0421 21:11:58.254978    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:11:58.254978    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:11:58.254978    1388 main.go:141] libmachine: Waiting for host to start...
	I0421 21:11:58.255598    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:00.564925    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:00.564925    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:00.564925    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:03.211613    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:12:03.211613    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:04.216467    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:06.473745    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:06.473745    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:06.474548    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:09.126212    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:12:09.126212    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:10.139088    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:12.401803    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:12.401803    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:12.401927    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:15.061819    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:12:15.062222    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:16.065904    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:18.332895    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:18.332895    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:18.332895    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:20.958137    1388 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:12:20.958137    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:21.965195    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:24.235497    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:24.235497    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:24.235755    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:27.090489    1388 main.go:141] libmachine: [stdout =====>] : 172.27.195.132
	
	I0421 21:12:27.090489    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:27.090702    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:29.293562    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:29.293562    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:29.293562    1388 machine.go:94] provisionDockerMachine start ...
	I0421 21:12:29.294651    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:31.564975    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:31.565343    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:31.565343    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:34.230352    1388 main.go:141] libmachine: [stdout =====>] : 172.27.195.132
	
	I0421 21:12:34.230352    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:34.237295    1388 main.go:141] libmachine: Using SSH client type: native
	I0421 21:12:34.251039    1388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.132 22 <nil> <nil>}
	I0421 21:12:34.251039    1388 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 21:12:34.384062    1388 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 21:12:34.384172    1388 buildroot.go:166] provisioning hostname "cert-options-338400"
	I0421 21:12:34.384172    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:36.569730    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:36.569730    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:36.570508    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:39.240911    1388 main.go:141] libmachine: [stdout =====>] : 172.27.195.132
	
	I0421 21:12:39.240969    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:39.247741    1388 main.go:141] libmachine: Using SSH client type: native
	I0421 21:12:39.248282    1388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.132 22 <nil> <nil>}
	I0421 21:12:39.248282    1388 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-338400 && echo "cert-options-338400" | sudo tee /etc/hostname
	I0421 21:12:39.423363    1388 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-338400
	
	I0421 21:12:39.423363    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:41.614326    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:41.614326    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:41.614381    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
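	[editor's note] The pid-1388 lines above show the query pattern the Hyper-V driver relies on: shell out to powershell.exe with -NoProfile -NonInteractive, read the VM state, then read the first reported IP address (empty stdout until the guest has registered one), and only then open SSH to provision the hostname. The following is a minimal illustrative sketch of that pattern in Go, not the minikube/libmachine driver code itself; the VM name and the powershell.exe path are taken from the log excerpt above and are assumptions of the sketch.

	// hypervquery.go - illustrative sketch only; NOT minikube/libmachine code.
	// It reproduces the "[executing ==>]" pattern logged above: query a Hyper-V
	// VM's state, then the first IP address on its first network adapter.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// powershellPath matches the interpreter path seen in the log; adjust if needed.
	const powershellPath = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

	// psQuery runs one PowerShell expression non-interactively and returns its
	// trimmed stdout, mirroring the stdout/stderr pairs logged by libmachine.
	func psQuery(expr string) (string, error) {
		out, err := exec.Command(powershellPath, "-NoProfile", "-NonInteractive", expr).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		vm := "cert-options-338400" // VM name taken from the log excerpt above

		state, err := psQuery(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			fmt.Println("state query failed:", err)
			return
		}
		fmt.Println("state:", state) // e.g. "Running"

		// The IP query can legitimately return an empty string until the guest
		// reports an address, which is why the log shows an empty stdout on the
		// first attempt before "172.27.195.132" appears on a later retry.
		ip, err := psQuery(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err != nil {
			fmt.Println("ip query failed:", err)
			return
		}
		fmt.Println("ip:", ip) // e.g. "172.27.195.132"
	}

	The pid-1388 excerpt is interleaved with output from another test process (pid 804) from this point on; the lines that follow belong to the cert-expiration-104900 Docker restart failure.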
	I0421 21:12:43.314082     804 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4119085s)
	I0421 21:12:43.328583     804 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0421 21:12:43.398568     804 out.go:177] 
	W0421 21:12:43.403368     804 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 21 21:04:35 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:04:35 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:35.995067321Z" level=info msg="Starting up"
	Apr 21 21:04:35 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:35.997095700Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:36.002170348Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=672
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.038577088Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073401644Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073493643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073571043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073690841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073809040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.073946339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074176637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074280935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074351035Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074367035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074466634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.074834230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079442485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079584783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079753381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.079859980Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080052778Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080206577Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.080354676Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.105453528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.105909923Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106080921Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106213420Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106239320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.106426518Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107033112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107277310Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107412308Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107434508Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107452308Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107468708Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107484408Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107501407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107519107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107542107Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107579307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107595507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107618106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107634506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107648606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107663106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107679306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107697505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107724205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107739305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107754505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107774405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107788805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.107904803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108031202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108056302Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108080702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108094902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108113801Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108186701Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108227400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108243100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108254900Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108431598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108514297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108537097Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.108811094Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109069492Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109207591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:04:36 cert-expiration-104900 dockerd[672]: time="2024-04-21T21:04:36.109236590Z" level=info msg="containerd successfully booted in 0.072042s"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.097237691Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.133628454Z" level=info msg="Loading containers: start."
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.449384281Z" level=info msg="Loading containers: done."
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.476864858Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.482641311Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.610585969Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:04:37 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:04:37 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:04:37.613225747Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:10 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.095739570Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097376572Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097537272Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097680772Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:05:10 cert-expiration-104900 dockerd[666]: time="2024-04-21T21:05:10.097715572Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:05:11 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.179788181Z" level=info msg="Starting up"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.181143682Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:11.187188689Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1031
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.218459824Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263633374Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263773874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263942175Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.263984775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264034475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264052175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264263475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264381375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264407475Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264420975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264462275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.264621675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268378380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268430380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268593880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268702380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268737780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268759880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.268773380Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269055780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269181181Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269216781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269235581Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269252681Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269320981Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269593481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269701581Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269722981Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269739681Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269755781Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269777581Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269911581Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269949981Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.269970581Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270095882Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270127682Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270143982Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270173782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270198882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270216082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270234882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270250682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270266082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270279782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270299482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270322182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270341182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270356982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270371582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270386482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270406382Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270432182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270448382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270479882Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270563182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270602582Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270619182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270632282Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270701082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270721482Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.270787282Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271399083Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271634583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271719883Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:05:11 cert-expiration-104900 dockerd[1031]: time="2024-04-21T21:05:11.271763783Z" level=info msg="containerd successfully booted in 0.054391s"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.239054364Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.260426288Z" level=info msg="Loading containers: start."
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.472425724Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.569331433Z" level=info msg="Loading containers: done."
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.598001665Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.598148065Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.653884527Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:05:12 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:05:12 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:12.655959229Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:25 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.619424508Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.623929613Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624073513Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624129213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:05:25 cert-expiration-104900 dockerd[1025]: time="2024-04-21T21:05:25.624159513Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:05:26 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.706185522Z" level=info msg="Starting up"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.707570823Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:26.709504626Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1336
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.748674569Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780570405Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780647905Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780709405Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780873205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780924005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.780965605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781172506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781283806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781308106Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781322006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781353706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.781580306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785359810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785407510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785550610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785590711Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785655211Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785682011Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.785694911Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786039511Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786166611Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786232411Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786256711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786280811Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.786350011Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787328812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787528713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787620813Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787687813Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.787750513Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789533015Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789594815Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789616415Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789639315Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789657215Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789672715Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789686815Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789710315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789726715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789743815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789856415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789884415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789900715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789915015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789938115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789956015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789974615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.789988915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790003215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790018315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790036816Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790079216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790129116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790154116Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790198916Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790242316Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790259416Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790278316Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790366416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790408816Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790423916Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790840916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.790962717Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.791043317Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:05:26 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:26.791065817Z" level=info msg="containerd successfully booted in 0.043136s"
	Apr 21 21:05:27 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:27.760560100Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.609859948Z" level=info msg="Loading containers: start."
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.816500979Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.906680780Z" level=info msg="Loading containers: done."
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.930907807Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.931054107Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.976585958Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:05:28 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:05:28.976666158Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:05:28 cert-expiration-104900 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281763599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281889999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.281906499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.282004499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.299976430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.300328031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.300419731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.302504135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393307490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393786091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.393893091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.394254592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.409093218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.410697420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.413891626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.414390727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947040941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947175241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.947209542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:39 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:39.948217843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077141962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077323562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077348762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.077540762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.110319517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.111301519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.111889920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.112281121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.126922645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.127331946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.127639147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:05:40 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:05:40.128539748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.057659703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.059092605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.059590805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.066668914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178063354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178380854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178420754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.178606854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610555295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610690696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610707996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.610872496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.843436987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.844559189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.855781803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.856658804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.864785414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.865602115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.871949623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:02 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:02.872123423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549111764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549374665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.549474765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:03 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:03.550353867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:06:33.247244254Z" level=info msg="ignoring event" container=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.248202360Z" level=info msg="shim disconnected" id=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 namespace=moby
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.248900365Z" level=warning msg="cleaning up after shim disconnected" id=bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6 namespace=moby
	Apr 21 21:06:33 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:33.250534776Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204432664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204751167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.204916268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:06:34 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:06:34.206642079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:31 cert-expiration-104900 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:11:31 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:31.933256937Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.186342096Z" level=info msg="shim disconnected" id=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.187600286Z" level=warning msg="cleaning up after shim disconnected" id=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.187847284Z" level=info msg="ignoring event" container=bd92feb12aca806227f01ed1846439a870d2b17cef546d27c3b7b5b7a98c1149 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.188058082Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.255949235Z" level=info msg="ignoring event" container=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257404823Z" level=info msg="shim disconnected" id=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257736121Z" level=warning msg="cleaning up after shim disconnected" id=c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.257911919Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.264762864Z" level=info msg="shim disconnected" id=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.264943062Z" level=warning msg="cleaning up after shim disconnected" id=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.265078361Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.266193852Z" level=info msg="ignoring event" container=b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.268712732Z" level=info msg="ignoring event" container=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269227928Z" level=info msg="shim disconnected" id=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269364127Z" level=warning msg="cleaning up after shim disconnected" id=2addaa9c268727e81999f6751a3e19b6adeb3c2be9069aed705b68b47d11f9ac namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.269588325Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274409586Z" level=info msg="shim disconnected" id=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.274434286Z" level=info msg="ignoring event" container=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274478286Z" level=warning msg="cleaning up after shim disconnected" id=fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.274496785Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.281792927Z" level=info msg="ignoring event" container=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.282136724Z" level=info msg="shim disconnected" id=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.283055216Z" level=info msg="ignoring event" container=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.302886057Z" level=warning msg="cleaning up after shim disconnected" id=5c9c6334b219a047b4c4ef31c159c7bc125e7790ca7229cc2296e9107d3540b7 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.304848741Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316457047Z" level=info msg="shim disconnected" id=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316582446Z" level=warning msg="cleaning up after shim disconnected" id=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.316680445Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.284291306Z" level=info msg="shim disconnected" id=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.321162409Z" level=info msg="ignoring event" container=3182e24f5339be1d559e9f835d51b29d8685e1b3d974c68042e72fa7e9de11f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.321798204Z" level=info msg="ignoring event" container=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.324036986Z" level=warning msg="cleaning up after shim disconnected" id=cf813276c712b8c9365bbec51a6c1f994975839be830d5f5e3a3c2663f8dfea2 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.324217384Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.332933614Z" level=info msg="ignoring event" container=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.333133213Z" level=info msg="ignoring event" container=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:32.333476910Z" level=info msg="ignoring event" container=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.322438799Z" level=info msg="shim disconnected" id=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343857926Z" level=warning msg="cleaning up after shim disconnected" id=61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343896826Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345006117Z" level=info msg="shim disconnected" id=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345284215Z" level=warning msg="cleaning up after shim disconnected" id=3ee07b18ff653202aebb50ee88bc005e5ef34e3e381817a4c896940541aeeb57 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.345610212Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.343772027Z" level=info msg="shim disconnected" id=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.363712166Z" level=warning msg="cleaning up after shim disconnected" id=358d9ff05c997bf6d24e0b4ff18aaaa1c32dcf21a37128f4180b749d478b187a namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.363864065Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.365833449Z" level=info msg="shim disconnected" id=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.366107647Z" level=warning msg="cleaning up after shim disconnected" id=6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5 namespace=moby
	Apr 21 21:11:32 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:32.366225546Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:37.095380412Z" level=info msg="ignoring event" container=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095781808Z" level=info msg="shim disconnected" id=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095841908Z" level=warning msg="cleaning up after shim disconnected" id=13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903 namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.095853408Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:37 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:37.129671635Z" level=warning msg="cleanup warnings time=\"2024-04-21T21:11:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.067623447Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.116791123Z" level=info msg="shim disconnected" id=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.117447089Z" level=warning msg="cleaning up after shim disconnected" id=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.117864257Z" level=info msg="ignoring event" container=18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1336]: time="2024-04-21T21:11:42.118081745Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.195261646Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196084979Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196537662Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:11:42 cert-expiration-104900 dockerd[1330]: time="2024-04-21T21:11:42.196602388Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: docker.service: Consumed 10.039s CPU time.
	Apr 21 21:11:43 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:11:43 cert-expiration-104900 dockerd[4632]: time="2024-04-21T21:11:43.282728411Z" level=info msg="Starting up"
	Apr 21 21:12:43 cert-expiration-104900 dockerd[4632]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:12:43 cert-expiration-104900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0421 21:12:43.404616     804 out.go:239] * 
	W0421 21:12:43.405936     804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 21:12:43.411146     804 out.go:177] 
	I0421 21:12:44.342518    1388 main.go:141] libmachine: [stdout =====>] : 172.27.195.132
	
	I0421 21:12:44.342518    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:44.348899    1388 main.go:141] libmachine: Using SSH client type: native
	I0421 21:12:44.348899    1388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.132 22 <nil> <nil>}
	I0421 21:12:44.348899    1388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-338400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-338400/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-338400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 21:12:44.501736    1388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 21:12:44.501736    1388 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 21:12:44.501736    1388 buildroot.go:174] setting up certificates
	I0421 21:12:44.501736    1388 provision.go:84] configureAuth start
	I0421 21:12:44.501736    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:46.750821    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:46.750821    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:46.751040    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	I0421 21:12:49.513393    1388 main.go:141] libmachine: [stdout =====>] : 172.27.195.132
	
	I0421 21:12:49.513644    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:49.513819    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-338400 ).state
	I0421 21:12:51.761532    1388 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:12:51.761532    1388 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:12:51.761532    1388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-338400 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Apr 21 21:12:43 cert-expiration-104900 dockerd[4835]: time="2024-04-21T21:12:43.513713461Z" level=info msg="Starting up"
	Apr 21 21:13:43 cert-expiration-104900 dockerd[4835]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID '13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '13597eb9e05a78f06f5f95838df9dfb9dbe19b06735928a7f3e9165ff1a2e903'"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID '61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '61e307cc70b7ded7c7dcc85dc1cbb0c26f805168c86179d3aaab39205c3f2745'"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID 'bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bff72ca549b3047d5d78ab39c8afe0ce3b7c12b9e3391ebf434f900e884886f6'"
	Apr 21 21:13:43 cert-expiration-104900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID 'fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fae2251b13749868699526a6e781470ea65d20f5e7bd4414244ca3226a9d3172'"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID 'b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b5d7008ee77655c5dce65c67f1a14978300c5d84ba6e96eae235efeb9893a836'"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID '18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '18a54153f75c8dd4ba28737875115c64df433a8e273e9177ca54a8e45cde1578'"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID '6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6582ea919d32a17f94a4e8655057f2205d5ffe3407e57257649c148fe56fbbe5'"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="error getting RW layer size for container ID 'c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:13:43 cert-expiration-104900 cri-dockerd[1235]: time="2024-04-21T21:13:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c3ed6551871c45840100a67be9a57ee814f27261ecbafe2ac602a1dab0ce5438'"
	Apr 21 21:13:43 cert-expiration-104900 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 21 21:13:43 cert-expiration-104900 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 21 21:13:43 cert-expiration-104900 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:13:43 cert-expiration-104900 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-21T21:13:45Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.203225] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Apr21 21:05] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.122784] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.656771] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.222437] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.252742] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +2.938437] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.224094] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.234981] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.305875] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[ +11.820614] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.125511] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.853163] systemd-fstab-generator[1521]: Ignoring "noauto" option for root device
	[  +8.157384] systemd-fstab-generator[1725]: Ignoring "noauto" option for root device
	[  +0.125822] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.580215] systemd-fstab-generator[2129]: Ignoring "noauto" option for root device
	[  +0.157950] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.441610] systemd-fstab-generator[2194]: Ignoring "noauto" option for root device
	[Apr21 21:06] kauditd_printk_skb: 34 callbacks suppressed
	[ +31.067802] kauditd_printk_skb: 59 callbacks suppressed
	[Apr21 21:11] systemd-fstab-generator[4165]: Ignoring "noauto" option for root device
	[  +0.759434] systemd-fstab-generator[4203]: Ignoring "noauto" option for root device
	[  +0.341636] systemd-fstab-generator[4215]: Ignoring "noauto" option for root device
	[  +0.366547] systemd-fstab-generator[4229]: Ignoring "noauto" option for root device
	[  +5.402927] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 21:14:44 up 11 min,  0 users,  load average: 0.09, 0.41, 0.26
	Linux cert-expiration-104900 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: E0421 21:14:37.688137    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-104900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-104900?timeout=10s\": dial tcp 172.27.199.208:8443: connect: connection refused"
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: E0421 21:14:37.689716    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-104900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-104900?timeout=10s\": dial tcp 172.27.199.208:8443: connect: connection refused"
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: E0421 21:14:37.690748    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-104900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-104900?timeout=10s\": dial tcp 172.27.199.208:8443: connect: connection refused"
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: E0421 21:14:37.691746    2137 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-104900\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-104900?timeout=10s\": dial tcp 172.27.199.208:8443: connect: connection refused"
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: E0421 21:14:37.691860    2137 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: I0421 21:14:37.782454    2137 status_manager.go:853] "Failed to get status for pod" podUID="16a4586f-2a59-456b-85a9-8a4968cf921f" pod="kube-system/coredns-7db6d8ff4d-c9x6h" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-c9x6h\": dial tcp 172.27.199.208:8443: connect: connection refused"
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: I0421 21:14:37.783810    2137 status_manager.go:853] "Failed to get status for pod" podUID="d45f0f06caf7dc55b251046eee0bb2c1" pod="kube-system/kube-apiserver-cert-expiration-104900" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-cert-expiration-104900\": dial tcp 172.27.199.208:8443: connect: connection refused"
	Apr 21 21:14:37 cert-expiration-104900 kubelet[2137]: E0421 21:14:37.872108    2137 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-104900?timeout=10s\": dial tcp 172.27.199.208:8443: connect: connection refused" interval="7s"
	Apr 21 21:14:39 cert-expiration-104900 kubelet[2137]: E0421 21:14:39.657858    2137 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m8.662393171s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.941863    2137 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 172.27.199.208:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-cert-expiration-104900.17c867e5360c0bc0  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-cert-expiration-104900,UID:d45f0f06caf7dc55b251046eee0bb2c1,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.199.208:8443/readyz\": dial tcp 172.27.199.208:8443: connect: connection refused,Source:EventSource{Component:kubelet,Host:cert-expiration-104900,},FirstTimestamp:2024-04-21 21:11:32.366404544 +0000 UTC m=+344.872218892,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cert-expiration-104900,}"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.983192    2137 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.983352    2137 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.983762    2137 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.983797    2137 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.983815    2137 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.983862    2137 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.983891    2137 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.985130    2137 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.985321    2137 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.986099    2137 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.986434    2137 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.986467    2137 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.986493    2137 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: E0421 21:14:43.986571    2137 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:14:43 cert-expiration-104900 kubelet[2137]: I0421 21:14:43.986587    2137 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:12:56.532027    6088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0421 21:13:43.559565    6088 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:13:43.597199    6088 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:13:43.633735    6088 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:13:43.665814    6088 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:13:43.708528    6088 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:13:43.745782    6088 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:13:43.780434    6088 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:13:43.814430    6088 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-104900 -n cert-expiration-104900
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-104900 -n cert-expiration-104900: exit status 2 (14.3765223s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:14:44.800952    5732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-104900" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-104900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-104900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-104900: (1m4.7628797s)
--- FAIL: TestCertExpiration (1126.37s)

                                                
                                    
TestErrorSpam/setup (204.53s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-389800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 --driver=hyperv
E0421 18:35:36.876853   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:36.891745   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:36.907922   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:36.938544   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:36.985634   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:37.078359   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:37.249754   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:37.583362   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:38.233267   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:39.515865   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:42.078176   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:47.200076   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:35:57.442383   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:36:17.924579   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:36:58.893488   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:38:20.817362   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-389800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 --driver=hyperv: (3m24.5259968s)
error_spam_test.go:96: unexpected stderr: "W0421 18:35:12.765999    8432 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-389800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18702
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-389800" primary control-plane node in "nospam-389800" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-389800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0421 18:35:12.765999    8432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (204.53s)
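Note: the only stderr that trips error_spam_test.go:96 here is the Docker CLI "Unable to resolve the current Docker CLI context" warning. The long hex directory in that path matches the SHA-256 digest of the context name "default" (the Docker CLI keys its context-store metadata directories by that digest), so the warning only means the Jenkins account has never created any Docker CLI contexts. A small illustrative program (not part of the test suite) that reproduces the digest and the meta.json path seen in the log:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // The Docker CLI context store keys each context by the SHA-256 of its name.
        digest := sha256.Sum256([]byte("default"))
        fmt.Printf("%x\n", digest) // matches the hex directory in the warning above

        // Home directory taken from the log, printed only to show where the
        // CLI looks for the (absent) context metadata.
        home := `C:\Users\jenkins.minikube6`
        fmt.Printf(`%s\.docker\contexts\meta\%x\meta.json`+"\n", home, digest)
    }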

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (35.08s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
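Note: functional_test.go:731 fails because a kubectl.exe left behind in out\ by an earlier run makes the link call hit the Windows "file already exists" error. A minimal sketch (not the actual test code; the helper name is hypothetical, the paths are the ones from the failure message) of an idempotent way to re-create that hard link:

    package main

    import (
        "fmt"
        "os"
    )

    // linkKubectl re-creates the kubectl.exe hard link next to the minikube binary,
    // removing any stale copy from a previous run so os.Link cannot fail with
    // "Cannot create a file when that file already exists".
    func linkKubectl(minikubeBin, kubectlDst string) error {
        if err := os.Remove(kubectlDst); err != nil && !os.IsNotExist(err) {
            return fmt.Errorf("removing stale %s: %w", kubectlDst, err)
        }
        return os.Link(minikubeBin, kubectlDst)
    }

    func main() {
        if err := linkKubectl(`out\minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }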
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-808300 -n functional-808300: (12.5062241s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs -n 25: (8.921101s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-389800 --log_dir                                     | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:39 UTC | 21 Apr 24 18:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-389800 --log_dir                                     | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:39 UTC | 21 Apr 24 18:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-389800 --log_dir                                     | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:40 UTC | 21 Apr 24 18:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-389800 --log_dir                                     | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:40 UTC | 21 Apr 24 18:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-389800 --log_dir                                     | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:40 UTC | 21 Apr 24 18:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-389800 --log_dir                                     | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:40 UTC | 21 Apr 24 18:41 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-389800 --log_dir                                     | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:41 UTC | 21 Apr 24 18:41 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-389800                                            | nospam-389800     | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:41 UTC | 21 Apr 24 18:41 UTC |
	| start   | -p functional-808300                                        | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:41 UTC | 21 Apr 24 18:45 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-808300                                        | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:45 UTC | 21 Apr 24 18:47 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:47 UTC | 21 Apr 24 18:48 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache add                                 | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	|         | minikube-local-cache-test:functional-808300                 |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache delete                              | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	|         | minikube-local-cache-test:functional-808300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	| ssh     | functional-808300 ssh sudo                                  | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-808300                                           | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC | 21 Apr 24 18:48 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-808300 ssh                                       | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:48 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-808300 cache reload                              | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:49 UTC | 21 Apr 24 18:49 UTC |
	| ssh     | functional-808300 ssh                                       | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:49 UTC | 21 Apr 24 18:49 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:49 UTC | 21 Apr 24 18:49 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:49 UTC | 21 Apr 24 18:49 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-808300 kubectl --                                | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:49 UTC | 21 Apr 24 18:49 UTC |
	|         | --context functional-808300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:45:45
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:45:45.469741   14120 out.go:291] Setting OutFile to fd 804 ...
	I0421 18:45:45.470404   14120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:45:45.470404   14120 out.go:304] Setting ErrFile to fd 932...
	I0421 18:45:45.470404   14120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:45:45.497632   14120 out.go:298] Setting JSON to false
	I0421 18:45:45.501465   14120 start.go:129] hostinfo: {"hostname":"minikube6","uptime":11020,"bootTime":1713714124,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 18:45:45.501465   14120 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 18:45:45.505614   14120 out.go:177] * [functional-808300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 18:45:45.511634   14120 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:45:45.511634   14120 notify.go:220] Checking for updates...
	I0421 18:45:45.514834   14120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:45:45.517509   14120 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 18:45:45.520257   14120 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:45:45.522499   14120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:45:45.526841   14120 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:45:45.526841   14120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:45:51.030326   14120 out.go:177] * Using the hyperv driver based on existing profile
	I0421 18:45:51.034149   14120 start.go:297] selected driver: hyperv
	I0421 18:45:51.034149   14120 start.go:901] validating driver "hyperv" against &{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.199.19 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:45:51.034149   14120 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 18:45:51.098089   14120 cni.go:84] Creating CNI manager for ""
	I0421 18:45:51.098221   14120 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 18:45:51.098438   14120 start.go:340] cluster config:
	{Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.199.19 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:45:51.099010   14120 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:45:51.104112   14120 out.go:177] * Starting "functional-808300" primary control-plane node in "functional-808300" cluster
	I0421 18:45:51.106646   14120 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 18:45:51.106646   14120 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 18:45:51.106646   14120 cache.go:56] Caching tarball of preloaded images
	I0421 18:45:51.107325   14120 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 18:45:51.107430   14120 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 18:45:51.107430   14120 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\config.json ...
	I0421 18:45:51.110314   14120 start.go:360] acquireMachinesLock for functional-808300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 18:45:51.110314   14120 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-808300"
	I0421 18:45:51.110314   14120 start.go:96] Skipping create...Using existing machine configuration
	I0421 18:45:51.110314   14120 fix.go:54] fixHost starting: 
	I0421 18:45:51.110314   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:45:53.911344   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:45:53.911344   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:45:53.911628   14120 fix.go:112] recreateIfNeeded on functional-808300: state=Running err=<nil>
	W0421 18:45:53.911628   14120 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 18:45:53.915886   14120 out.go:177] * Updating the running hyperv "functional-808300" VM ...
	I0421 18:45:53.918823   14120 machine.go:94] provisionDockerMachine start ...
	I0421 18:45:53.919047   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:45:56.136147   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:45:56.136147   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:45:56.136285   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:45:58.792460   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:45:58.792693   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:45:58.800879   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:45:58.801578   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:45:58.801578   14120 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 18:45:58.947058   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0421 18:45:58.947598   14120 buildroot.go:166] provisioning hostname "functional-808300"
	I0421 18:45:58.947684   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:01.127269   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:01.127269   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:01.127269   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:03.778437   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:03.778437   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:03.784607   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:46:03.785125   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:46:03.785125   14120 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808300 && echo "functional-808300" | sudo tee /etc/hostname
	I0421 18:46:03.961119   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808300
	
	I0421 18:46:03.961217   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:06.125081   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:06.125239   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:06.125239   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:08.758372   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:08.758372   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:08.766391   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:46:08.766580   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:46:08.766580   14120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 18:46:08.911547   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:46:08.911704   14120 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 18:46:08.911704   14120 buildroot.go:174] setting up certificates
	I0421 18:46:08.911704   14120 provision.go:84] configureAuth start
	I0421 18:46:08.911704   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:11.131560   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:11.132433   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:11.132547   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:13.806384   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:13.806992   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:13.807084   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:15.971745   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:15.972425   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:15.972528   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:18.590278   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:18.590278   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:18.590350   14120 provision.go:143] copyHostCerts
	I0421 18:46:18.590350   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 18:46:18.590350   14120 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 18:46:18.590350   14120 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 18:46:18.590976   14120 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 18:46:18.592617   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 18:46:18.592617   14120 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 18:46:18.592617   14120 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 18:46:18.593244   14120 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 18:46:18.594067   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 18:46:18.594067   14120 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 18:46:18.594067   14120 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 18:46:18.594792   14120 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 18:46:18.595978   14120 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-808300 san=[127.0.0.1 172.27.199.19 functional-808300 localhost minikube]
	I0421 18:46:19.105937   14120 provision.go:177] copyRemoteCerts
	I0421 18:46:19.118285   14120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 18:46:19.118285   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:21.276658   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:21.276866   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:21.276985   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:23.893170   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:23.894187   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:23.894684   14120 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0421 18:46:24.012256   14120 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8938767s)
	I0421 18:46:24.012354   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 18:46:24.013119   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 18:46:24.075588   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 18:46:24.076116   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0421 18:46:24.130335   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 18:46:24.131033   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 18:46:24.188208   14120 provision.go:87] duration metric: took 15.2763962s to configureAuth
	I0421 18:46:24.188318   14120 buildroot.go:189] setting minikube options for container-runtime
	I0421 18:46:24.188856   14120 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:46:24.188947   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:26.358094   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:26.358094   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:26.358094   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:28.962814   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:28.962814   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:28.969074   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:46:28.969824   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:46:28.969824   14120 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 18:46:29.109908   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 18:46:29.109972   14120 buildroot.go:70] root file system type: tmpfs
	I0421 18:46:29.110188   14120 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 18:46:29.110249   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:31.260423   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:31.260423   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:31.260423   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:33.889861   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:33.889924   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:33.895349   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:46:33.895891   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:46:33.896045   14120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 18:46:34.065209   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 18:46:34.065209   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:36.224784   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:36.225815   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:36.225883   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:38.827713   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:38.827713   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:38.834152   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:46:38.835236   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:46:38.835510   14120 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 18:46:39.010154   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 18:46:39.010154   14120 machine.go:97] duration metric: took 45.0908595s to provisionDockerMachine
	I0421 18:46:39.010154   14120 start.go:293] postStartSetup for "functional-808300" (driver="hyperv")
	I0421 18:46:39.010154   14120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 18:46:39.026361   14120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 18:46:39.026361   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:41.151626   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:41.151626   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:41.151626   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:43.725191   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:43.725191   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:43.725970   14120 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0421 18:46:43.836645   14120 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8102501s)
	I0421 18:46:43.851395   14120 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 18:46:43.858460   14120 command_runner.go:130] > NAME=Buildroot
	I0421 18:46:43.858687   14120 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 18:46:43.858792   14120 command_runner.go:130] > ID=buildroot
	I0421 18:46:43.858792   14120 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 18:46:43.858792   14120 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 18:46:43.858792   14120 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 18:46:43.858792   14120 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 18:46:43.859325   14120 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 18:46:43.860352   14120 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 18:46:43.860437   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 18:46:43.861975   14120 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13800\hosts -> hosts in /etc/test/nested/copy/13800
	I0421 18:46:43.862030   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13800\hosts -> /etc/test/nested/copy/13800/hosts
	I0421 18:46:43.875879   14120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13800
	I0421 18:46:43.896273   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 18:46:43.950703   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13800\hosts --> /etc/test/nested/copy/13800/hosts (40 bytes)
	I0421 18:46:44.007320   14120 start.go:296] duration metric: took 4.9971305s for postStartSetup
	I0421 18:46:44.007320   14120 fix.go:56] duration metric: took 52.8966308s for fixHost
	I0421 18:46:44.007320   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:46.189070   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:46.189070   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:46.189070   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:48.840816   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:48.840816   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:48.847341   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:46:48.847994   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:46:48.847994   14120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 18:46:48.985298   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713725208.985367578
	
	I0421 18:46:48.985298   14120 fix.go:216] guest clock: 1713725208.985367578
	I0421 18:46:48.985298   14120 fix.go:229] Guest: 2024-04-21 18:46:48.985367578 +0000 UTC Remote: 2024-04-21 18:46:44.0073207 +0000 UTC m=+58.735981901 (delta=4.978046878s)
	I0421 18:46:48.985298   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:51.163362   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:51.163362   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:51.164115   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:53.875679   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:53.876335   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:53.882950   14120 main.go:141] libmachine: Using SSH client type: native
	I0421 18:46:53.883341   14120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.199.19 22 <nil> <nil>}
	I0421 18:46:53.883341   14120 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713725208
	I0421 18:46:54.048741   14120 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 18:46:48 UTC 2024
	
	I0421 18:46:54.048818   14120 fix.go:236] clock set: Sun Apr 21 18:46:48 UTC 2024
	 (err=<nil>)
	I0421 18:46:54.048818   14120 start.go:83] releasing machines lock for "functional-808300", held for 1m2.9380568s
	I0421 18:46:54.049188   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:56.373149   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:46:56.373459   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:56.373536   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:46:59.259381   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:46:59.260226   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:46:59.264429   14120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 18:46:59.264979   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:46:59.282306   14120 ssh_runner.go:195] Run: cat /version.json
	I0421 18:46:59.282306   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:47:01.529666   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:47:01.529666   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:01.529666   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:47:01.544533   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:47:01.545265   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:01.545265   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:47:04.258571   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:47:04.258571   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:04.259575   14120 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0421 18:47:04.291840   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:47:04.291840   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:04.292870   14120 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0421 18:47:04.367536   14120 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0421 18:47:04.367787   14120 ssh_runner.go:235] Completed: cat /version.json: (5.0853576s)
	I0421 18:47:04.383774   14120 ssh_runner.go:195] Run: systemctl --version
	I0421 18:47:04.441462   14120 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 18:47:04.442052   14120 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1775862s)
	I0421 18:47:04.442052   14120 command_runner.go:130] > systemd 252 (252)
	I0421 18:47:04.442194   14120 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0421 18:47:04.455281   14120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 18:47:04.464496   14120 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0421 18:47:04.465134   14120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 18:47:04.479489   14120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 18:47:04.499715   14120 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0421 18:47:04.499715   14120 start.go:494] detecting cgroup driver to use...
	I0421 18:47:04.500099   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:47:04.542249   14120 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0421 18:47:04.555794   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 18:47:04.596688   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 18:47:04.620210   14120 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 18:47:04.633482   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 18:47:04.670840   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 18:47:04.708687   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 18:47:04.744971   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 18:47:04.782207   14120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 18:47:04.825743   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 18:47:04.865229   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 18:47:04.901135   14120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 18:47:04.940399   14120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 18:47:04.964887   14120 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 18:47:04.980066   14120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 18:47:05.015958   14120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:47:05.334306   14120 ssh_runner.go:195] Run: sudo systemctl restart containerd
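For illustration only (this is not minikube's actual implementation), the SystemdCgroup rewrite performed by the sed commands above could be expressed in Go roughly as follows; the sample config.toml fragment is assumed here, not copied from the VM:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed fragment of /etc/containerd/config.toml for illustration.
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`

	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}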
	I0421 18:47:05.370220   14120 start.go:494] detecting cgroup driver to use...
	I0421 18:47:05.385472   14120 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 18:47:05.410964   14120 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0421 18:47:05.411886   14120 command_runner.go:130] > [Unit]
	I0421 18:47:05.412104   14120 command_runner.go:130] > Description=Docker Application Container Engine
	I0421 18:47:05.412104   14120 command_runner.go:130] > Documentation=https://docs.docker.com
	I0421 18:47:05.412104   14120 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0421 18:47:05.412155   14120 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0421 18:47:05.412155   14120 command_runner.go:130] > StartLimitBurst=3
	I0421 18:47:05.412155   14120 command_runner.go:130] > StartLimitIntervalSec=60
	I0421 18:47:05.412155   14120 command_runner.go:130] > [Service]
	I0421 18:47:05.412155   14120 command_runner.go:130] > Type=notify
	I0421 18:47:05.412202   14120 command_runner.go:130] > Restart=on-failure
	I0421 18:47:05.412202   14120 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0421 18:47:05.412230   14120 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0421 18:47:05.412230   14120 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0421 18:47:05.412230   14120 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0421 18:47:05.412230   14120 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0421 18:47:05.412230   14120 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0421 18:47:05.412230   14120 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0421 18:47:05.412230   14120 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0421 18:47:05.412230   14120 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0421 18:47:05.412230   14120 command_runner.go:130] > ExecStart=
	I0421 18:47:05.412230   14120 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0421 18:47:05.412230   14120 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0421 18:47:05.412230   14120 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0421 18:47:05.412230   14120 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0421 18:47:05.412230   14120 command_runner.go:130] > LimitNOFILE=infinity
	I0421 18:47:05.412230   14120 command_runner.go:130] > LimitNPROC=infinity
	I0421 18:47:05.412230   14120 command_runner.go:130] > LimitCORE=infinity
	I0421 18:47:05.412230   14120 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0421 18:47:05.412230   14120 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0421 18:47:05.412230   14120 command_runner.go:130] > TasksMax=infinity
	I0421 18:47:05.412769   14120 command_runner.go:130] > TimeoutStartSec=0
	I0421 18:47:05.412816   14120 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0421 18:47:05.412816   14120 command_runner.go:130] > Delegate=yes
	I0421 18:47:05.412816   14120 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0421 18:47:05.412816   14120 command_runner.go:130] > KillMode=process
	I0421 18:47:05.412816   14120 command_runner.go:130] > [Install]
	I0421 18:47:05.412873   14120 command_runner.go:130] > WantedBy=multi-user.target
	I0421 18:47:05.427114   14120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:47:05.474277   14120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 18:47:05.531750   14120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 18:47:05.574854   14120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 18:47:05.607589   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 18:47:05.645031   14120 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0421 18:47:05.659750   14120 ssh_runner.go:195] Run: which cri-dockerd
	I0421 18:47:05.666947   14120 command_runner.go:130] > /usr/bin/cri-dockerd
	I0421 18:47:05.682261   14120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 18:47:05.707182   14120 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 18:47:05.758156   14120 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 18:47:06.079663   14120 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 18:47:06.390457   14120 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 18:47:06.390488   14120 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
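The 130-byte /etc/docker/daemon.json written above is not shown in the log; as a minimal sketch, a daemon.json that pins Docker to the cgroupfs driver would use the standard exec-opts setting (assumed here rather than taken from the VM), for example generated like this:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical daemon.json fragment selecting the cgroupfs driver; the
	// exact file minikube writes above is not visible in this log.
	cfg := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}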
	I0421 18:47:06.440553   14120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:47:06.748726   14120 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 18:47:19.810745   14120 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0619266s)
	I0421 18:47:19.823732   14120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 18:47:19.883450   14120 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0421 18:47:19.937283   14120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 18:47:19.980347   14120 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 18:47:20.226997   14120 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 18:47:20.464693   14120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:47:20.712553   14120 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 18:47:20.760840   14120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 18:47:20.807058   14120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:47:21.078192   14120 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 18:47:21.236998   14120 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 18:47:21.250946   14120 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 18:47:21.259813   14120 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0421 18:47:21.259813   14120 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 18:47:21.259813   14120 command_runner.go:130] > Device: 0,22	Inode: 1515        Links: 1
	I0421 18:47:21.259813   14120 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0421 18:47:21.259813   14120 command_runner.go:130] > Access: 2024-04-21 18:47:21.117457005 +0000
	I0421 18:47:21.259937   14120 command_runner.go:130] > Modify: 2024-04-21 18:47:21.117457005 +0000
	I0421 18:47:21.259937   14120 command_runner.go:130] > Change: 2024-04-21 18:47:21.122456923 +0000
	I0421 18:47:21.259937   14120 command_runner.go:130] >  Birth: -
	I0421 18:47:21.259999   14120 start.go:562] Will wait 60s for crictl version
	I0421 18:47:21.273135   14120 ssh_runner.go:195] Run: which crictl
	I0421 18:47:21.279700   14120 command_runner.go:130] > /usr/bin/crictl
	I0421 18:47:21.293553   14120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 18:47:21.352444   14120 command_runner.go:130] > Version:  0.1.0
	I0421 18:47:21.352512   14120 command_runner.go:130] > RuntimeName:  docker
	I0421 18:47:21.352512   14120 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0421 18:47:21.352512   14120 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 18:47:21.352512   14120 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 18:47:21.363931   14120 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 18:47:21.399897   14120 command_runner.go:130] > 26.0.1
	I0421 18:47:21.409897   14120 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 18:47:21.444940   14120 command_runner.go:130] > 26.0.1
	I0421 18:47:21.450258   14120 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 18:47:21.450258   14120 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 18:47:21.454251   14120 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 18:47:21.454251   14120 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 18:47:21.454251   14120 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 18:47:21.454251   14120 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 18:47:21.457364   14120 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 18:47:21.457364   14120 ip.go:210] interface addr: 172.27.192.1/20
	I0421 18:47:21.469296   14120 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 18:47:21.476835   14120 command_runner.go:130] > 172.27.192.1	host.minikube.internal
	I0421 18:47:21.477179   14120 kubeadm.go:877] updating cluster {Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional
-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.199.19 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 18:47:21.477179   14120 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 18:47:21.487561   14120 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 18:47:21.512549   14120 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0421 18:47:21.512549   14120 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0421 18:47:21.512549   14120 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0421 18:47:21.512549   14120 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0421 18:47:21.512549   14120 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0421 18:47:21.512549   14120 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0421 18:47:21.512549   14120 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0421 18:47:21.512698   14120 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:47:21.512771   14120 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0421 18:47:21.512828   14120 docker.go:615] Images already preloaded, skipping extraction
	I0421 18:47:21.525372   14120 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 18:47:21.550951   14120 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0421 18:47:21.551621   14120 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0421 18:47:21.551621   14120 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0421 18:47:21.551745   14120 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0421 18:47:21.551745   14120 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0421 18:47:21.551745   14120 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0421 18:47:21.551745   14120 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0421 18:47:21.551745   14120 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:47:21.551811   14120 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0421 18:47:21.551811   14120 cache_images.go:84] Images are preloaded, skipping loading
	I0421 18:47:21.551811   14120 kubeadm.go:928] updating node { 172.27.199.19 8441 v1.30.0 docker true true} ...
	I0421 18:47:21.551811   14120 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-808300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.199.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-808300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 18:47:21.560422   14120 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0421 18:47:21.595447   14120 command_runner.go:130] > cgroupfs
	I0421 18:47:21.596699   14120 cni.go:84] Creating CNI manager for ""
	I0421 18:47:21.596699   14120 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 18:47:21.596699   14120 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 18:47:21.596699   14120 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.199.19 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-808300 NodeName:functional-808300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.199.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.199.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 18:47:21.597332   14120 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.199.19
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-808300"
	  kubeletExtraArgs:
	    node-ip: 172.27.199.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.199.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 18:47:21.609522   14120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 18:47:21.627213   14120 command_runner.go:130] > kubeadm
	I0421 18:47:21.627213   14120 command_runner.go:130] > kubectl
	I0421 18:47:21.627362   14120 command_runner.go:130] > kubelet
	I0421 18:47:21.627362   14120 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 18:47:21.641448   14120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 18:47:21.661511   14120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0421 18:47:21.697682   14120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 18:47:21.732437   14120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0421 18:47:21.780436   14120 ssh_runner.go:195] Run: grep 172.27.199.19	control-plane.minikube.internal$ /etc/hosts
	I0421 18:47:21.789005   14120 command_runner.go:130] > 172.27.199.19	control-plane.minikube.internal
	I0421 18:47:21.802841   14120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:47:22.074830   14120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:47:22.123041   14120 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300 for IP: 172.27.199.19
	I0421 18:47:22.123041   14120 certs.go:194] generating shared ca certs ...
	I0421 18:47:22.123041   14120 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:47:22.124358   14120 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 18:47:22.124389   14120 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 18:47:22.124938   14120 certs.go:256] generating profile certs ...
	I0421 18:47:22.126146   14120 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.key
	I0421 18:47:22.126787   14120 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\apiserver.key.87e72ae6
	I0421 18:47:22.127332   14120 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\proxy-client.key
	I0421 18:47:22.127462   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 18:47:22.127735   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 18:47:22.127793   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 18:47:22.127793   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 18:47:22.127793   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 18:47:22.128550   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 18:47:22.128898   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 18:47:22.129166   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 18:47:22.130353   14120 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 18:47:22.130873   14120 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 18:47:22.130873   14120 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 18:47:22.130873   14120 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 18:47:22.131416   14120 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 18:47:22.131528   14120 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 18:47:22.132070   14120 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 18:47:22.132267   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:47:22.132267   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 18:47:22.133025   14120 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 18:47:22.134556   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 18:47:22.200403   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 18:47:22.261712   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 18:47:22.321790   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 18:47:22.380348   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 18:47:22.451421   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 18:47:22.513678   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 18:47:22.575049   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 18:47:22.667650   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 18:47:22.726744   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 18:47:22.804343   14120 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 18:47:22.876203   14120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 18:47:22.930551   14120 ssh_runner.go:195] Run: openssl version
	I0421 18:47:22.939547   14120 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 18:47:22.953522   14120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 18:47:22.990179   14120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 18:47:23.003354   14120 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 18:47:23.003492   14120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 18:47:23.016728   14120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 18:47:23.027130   14120 command_runner.go:130] > 3ec20f2e
	I0421 18:47:23.041552   14120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 18:47:23.125657   14120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 18:47:23.188111   14120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:47:23.216166   14120 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:47:23.216293   14120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:47:23.229634   14120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 18:47:23.238624   14120 command_runner.go:130] > b5213941
	I0421 18:47:23.253621   14120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 18:47:23.302260   14120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 18:47:23.341849   14120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 18:47:23.350830   14120 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 18:47:23.350830   14120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 18:47:23.364581   14120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 18:47:23.379772   14120 command_runner.go:130] > 51391683
	I0421 18:47:23.393574   14120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
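A minimal sketch of the hash-and-symlink step above: the logged commands derive the /etc/ssl/certs/<hash>.0 name from `openssl x509 -hash -noout` and then create the link with `ln -fs`. The sketch below shells out the same way; the certificate path is taken from the log, and this is only an illustration, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash mirrors `openssl x509 -hash -noout -in <pem>`, which yields the
// short subject hash used as the /etc/ssl/certs/<hash>.0 symlink name.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	// e.g. b5213941 -> /etc/ssl/certs/b5213941.0 (the target of the ln -fs above)
	fmt.Println(filepath.Join("/etc/ssl/certs", hash+".0"))
}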
	I0421 18:47:23.436275   14120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:47:23.445315   14120 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 18:47:23.445315   14120 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0421 18:47:23.445315   14120 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0421 18:47:23.445315   14120 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 18:47:23.445315   14120 command_runner.go:130] > Access: 2024-04-21 18:44:33.909859711 +0000
	I0421 18:47:23.445315   14120 command_runner.go:130] > Modify: 2024-04-21 18:44:33.909859711 +0000
	I0421 18:47:23.445315   14120 command_runner.go:130] > Change: 2024-04-21 18:44:33.909859711 +0000
	I0421 18:47:23.445315   14120 command_runner.go:130] >  Birth: 2024-04-21 18:44:33.909859711 +0000
	I0421 18:47:23.457275   14120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 18:47:23.477302   14120 command_runner.go:130] > Certificate will not expire
	I0421 18:47:23.491033   14120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 18:47:23.505959   14120 command_runner.go:130] > Certificate will not expire
	I0421 18:47:23.522044   14120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 18:47:23.535049   14120 command_runner.go:130] > Certificate will not expire
	I0421 18:47:23.548048   14120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 18:47:23.558098   14120 command_runner.go:130] > Certificate will not expire
	I0421 18:47:23.572049   14120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 18:47:23.587222   14120 command_runner.go:130] > Certificate will not expire
	I0421 18:47:23.601503   14120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 18:47:23.611500   14120 command_runner.go:130] > Certificate will not expire
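The `-checkend 86400` probes above ask whether each certificate expires within the next 24 hours. A rough equivalent using Go's crypto/x509 (a sketch under that assumption, not minikube's implementation) would be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition `openssl x509 -checkend 86400` tests in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}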
	I0421 18:47:23.612498   14120 kubeadm.go:391] StartCluster: {Name:functional-808300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-80
8300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.199.19 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:47:23.621484   14120 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 18:47:23.695097   14120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 18:47:23.722257   14120 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0421 18:47:23.722324   14120 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0421 18:47:23.722324   14120 command_runner.go:130] > /var/lib/minikube/etcd:
	I0421 18:47:23.722324   14120 command_runner.go:130] > member
	W0421 18:47:23.722324   14120 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 18:47:23.722465   14120 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 18:47:23.722574   14120 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 18:47:23.736697   14120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 18:47:23.763352   14120 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:47:23.764830   14120 kubeconfig.go:125] found "functional-808300" server: "https://172.27.199.19:8441"
	I0421 18:47:23.766316   14120 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:47:23.767181   14120 kapi.go:59] client config for functional-808300: &rest.Config{Host:"https://172.27.199.19:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 18:47:23.768575   14120 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 18:47:23.780417   14120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 18:47:23.821819   14120 kubeadm.go:624] The running cluster does not require reconfiguration: 172.27.199.19
	I0421 18:47:23.821909   14120 kubeadm.go:1154] stopping kube-system containers ...
	I0421 18:47:23.833050   14120 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 18:47:23.952560   14120 command_runner.go:130] > d697d660a8f7
	I0421 18:47:23.952682   14120 command_runner.go:130] > f3cd92f6f6fa
	I0421 18:47:23.952682   14120 command_runner.go:130] > fb8c1c2ac7c0
	I0421 18:47:23.952682   14120 command_runner.go:130] > 36dee7208f35
	I0421 18:47:23.952682   14120 command_runner.go:130] > fd569f7642c5
	I0421 18:47:23.952682   14120 command_runner.go:130] > 64339f31aff4
	I0421 18:47:23.952752   14120 command_runner.go:130] > ac9ce1b9a1c6
	I0421 18:47:23.952752   14120 command_runner.go:130] > 12ddfb2f2a47
	I0421 18:47:23.952752   14120 command_runner.go:130] > e3c37cf69583
	I0421 18:47:23.952752   14120 command_runner.go:130] > 4e5b0f820bc7
	I0421 18:47:23.952825   14120 command_runner.go:130] > 42f418f31377
	I0421 18:47:23.952825   14120 command_runner.go:130] > e5ce3ab7d2b3
	I0421 18:47:23.952825   14120 command_runner.go:130] > c4ea8ef69f39
	I0421 18:47:23.952825   14120 command_runner.go:130] > d88b146da260
	I0421 18:47:23.952825   14120 command_runner.go:130] > 3d49907b4986
	I0421 18:47:23.952825   14120 command_runner.go:130] > 9d785bc72922
	I0421 18:47:23.952896   14120 command_runner.go:130] > c5c308231267
	I0421 18:47:23.952896   14120 command_runner.go:130] > da1ec584014a
	I0421 18:47:23.952896   14120 command_runner.go:130] > 3dce17607591
	I0421 18:47:23.952896   14120 command_runner.go:130] > 4843b87bfab9
	I0421 18:47:23.952896   14120 command_runner.go:130] > e91acb9f349e
	I0421 18:47:23.952896   14120 command_runner.go:130] > 86baab72494c
	I0421 18:47:23.952962   14120 command_runner.go:130] > eefa71652071
	I0421 18:47:23.952962   14120 command_runner.go:130] > 95c307e96427
	I0421 18:47:23.952962   14120 command_runner.go:130] > 01c4a6616762
	I0421 18:47:23.952962   14120 command_runner.go:130] > a363a1f0c0af
	I0421 18:47:23.952962   14120 command_runner.go:130] > 4d48df16d365
	I0421 18:47:23.953031   14120 command_runner.go:130] > 35a582884d9e
	I0421 18:47:23.953207   14120 docker.go:483] Stopping containers: [d697d660a8f7 f3cd92f6f6fa fb8c1c2ac7c0 36dee7208f35 fd569f7642c5 64339f31aff4 ac9ce1b9a1c6 12ddfb2f2a47 e3c37cf69583 4e5b0f820bc7 42f418f31377 e5ce3ab7d2b3 c4ea8ef69f39 d88b146da260 3d49907b4986 9d785bc72922 c5c308231267 da1ec584014a 3dce17607591 4843b87bfab9 e91acb9f349e 86baab72494c eefa71652071 95c307e96427 01c4a6616762 a363a1f0c0af 4d48df16d365 35a582884d9e]
	I0421 18:47:23.970164   14120 ssh_runner.go:195] Run: docker stop d697d660a8f7 f3cd92f6f6fa fb8c1c2ac7c0 36dee7208f35 fd569f7642c5 64339f31aff4 ac9ce1b9a1c6 12ddfb2f2a47 e3c37cf69583 4e5b0f820bc7 42f418f31377 e5ce3ab7d2b3 c4ea8ef69f39 d88b146da260 3d49907b4986 9d785bc72922 c5c308231267 da1ec584014a 3dce17607591 4843b87bfab9 e91acb9f349e 86baab72494c eefa71652071 95c307e96427 01c4a6616762 a363a1f0c0af 4d48df16d365 35a582884d9e
	I0421 18:47:25.299075   14120 command_runner.go:130] > d697d660a8f7
	I0421 18:47:25.299164   14120 command_runner.go:130] > f3cd92f6f6fa
	I0421 18:47:25.299164   14120 command_runner.go:130] > fb8c1c2ac7c0
	I0421 18:47:25.299164   14120 command_runner.go:130] > 36dee7208f35
	I0421 18:47:25.299164   14120 command_runner.go:130] > fd569f7642c5
	I0421 18:47:25.299164   14120 command_runner.go:130] > 64339f31aff4
	I0421 18:47:25.299164   14120 command_runner.go:130] > ac9ce1b9a1c6
	I0421 18:47:25.299164   14120 command_runner.go:130] > 12ddfb2f2a47
	I0421 18:47:25.299164   14120 command_runner.go:130] > e3c37cf69583
	I0421 18:47:25.299164   14120 command_runner.go:130] > 4e5b0f820bc7
	I0421 18:47:25.299164   14120 command_runner.go:130] > 42f418f31377
	I0421 18:47:25.299164   14120 command_runner.go:130] > e5ce3ab7d2b3
	I0421 18:47:25.299164   14120 command_runner.go:130] > c4ea8ef69f39
	I0421 18:47:25.299164   14120 command_runner.go:130] > d88b146da260
	I0421 18:47:25.299164   14120 command_runner.go:130] > 3d49907b4986
	I0421 18:47:25.299164   14120 command_runner.go:130] > 9d785bc72922
	I0421 18:47:25.299164   14120 command_runner.go:130] > c5c308231267
	I0421 18:47:25.299164   14120 command_runner.go:130] > da1ec584014a
	I0421 18:47:25.299164   14120 command_runner.go:130] > 3dce17607591
	I0421 18:47:25.299164   14120 command_runner.go:130] > 4843b87bfab9
	I0421 18:47:25.299164   14120 command_runner.go:130] > e91acb9f349e
	I0421 18:47:25.299705   14120 command_runner.go:130] > 86baab72494c
	I0421 18:47:25.299705   14120 command_runner.go:130] > eefa71652071
	I0421 18:47:25.299789   14120 command_runner.go:130] > 95c307e96427
	I0421 18:47:25.299789   14120 command_runner.go:130] > 01c4a6616762
	I0421 18:47:25.299851   14120 command_runner.go:130] > a363a1f0c0af
	I0421 18:47:25.299851   14120 command_runner.go:130] > 4d48df16d365
	I0421 18:47:25.299851   14120 command_runner.go:130] > 35a582884d9e
	I0421 18:47:25.299851   14120 ssh_runner.go:235] Completed: docker stop d697d660a8f7 f3cd92f6f6fa fb8c1c2ac7c0 36dee7208f35 fd569f7642c5 64339f31aff4 ac9ce1b9a1c6 12ddfb2f2a47 e3c37cf69583 4e5b0f820bc7 42f418f31377 e5ce3ab7d2b3 c4ea8ef69f39 d88b146da260 3d49907b4986 9d785bc72922 c5c308231267 da1ec584014a 3dce17607591 4843b87bfab9 e91acb9f349e 86baab72494c eefa71652071 95c307e96427 01c4a6616762 a363a1f0c0af 4d48df16d365 35a582884d9e: (1.3296773s)
	I0421 18:47:25.313827   14120 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 18:47:25.384843   14120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 18:47:25.413459   14120 command_runner.go:130] > -rw------- 1 root root 5647 Apr 21 18:44 /etc/kubernetes/admin.conf
	I0421 18:47:25.413459   14120 command_runner.go:130] > -rw------- 1 root root 5657 Apr 21 18:44 /etc/kubernetes/controller-manager.conf
	I0421 18:47:25.413459   14120 command_runner.go:130] > -rw------- 1 root root 2007 Apr 21 18:44 /etc/kubernetes/kubelet.conf
	I0421 18:47:25.413459   14120 command_runner.go:130] > -rw------- 1 root root 5605 Apr 21 18:44 /etc/kubernetes/scheduler.conf
	I0421 18:47:25.413459   14120 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Apr 21 18:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Apr 21 18:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Apr 21 18:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Apr 21 18:44 /etc/kubernetes/scheduler.conf
	
	I0421 18:47:25.427464   14120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0421 18:47:25.460717   14120 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0421 18:47:25.475669   14120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0421 18:47:25.496885   14120 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0421 18:47:25.511085   14120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0421 18:47:25.528352   14120 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:47:25.546417   14120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 18:47:25.587467   14120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0421 18:47:25.604180   14120 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0421 18:47:25.618821   14120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 18:47:25.654988   14120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 18:47:25.680111   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 18:47:25.771092   14120 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 18:47:25.771176   14120 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0421 18:47:25.771176   14120 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0421 18:47:25.771176   14120 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 18:47:25.771176   14120 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0421 18:47:25.771243   14120 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0421 18:47:25.771243   14120 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0421 18:47:25.771243   14120 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0421 18:47:25.771243   14120 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0421 18:47:25.771312   14120 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 18:47:25.771312   14120 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 18:47:25.771312   14120 command_runner.go:130] > [certs] Using the existing "sa" key
	I0421 18:47:25.771462   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 18:47:27.499405   14120 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 18:47:27.499493   14120 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0421 18:47:27.499577   14120 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0421 18:47:27.499577   14120 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0421 18:47:27.499577   14120 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 18:47:27.499577   14120 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 18:47:27.499577   14120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.7281024s)
	I0421 18:47:27.499685   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 18:47:27.918809   14120 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 18:47:27.918971   14120 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 18:47:27.918971   14120 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0421 18:47:27.918971   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 18:47:28.049718   14120 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 18:47:28.049803   14120 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 18:47:28.049803   14120 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 18:47:28.049803   14120 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 18:47:28.050004   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0421 18:47:28.185547   14120 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 18:47:28.185678   14120 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:47:28.199135   14120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:47:28.704154   14120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:47:29.213661   14120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:47:29.708207   14120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:47:30.211997   14120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:47:30.240103   14120 command_runner.go:130] > 6049
	I0421 18:47:30.240103   14120 api_server.go:72] duration metric: took 2.0544102s to wait for apiserver process to appear ...
	I0421 18:47:30.240103   14120 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:47:30.240103   14120 api_server.go:253] Checking apiserver healthz at https://172.27.199.19:8441/healthz ...
	I0421 18:47:33.680645   14120 api_server.go:279] https://172.27.199.19:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 18:47:33.680645   14120 api_server.go:103] status: https://172.27.199.19:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 18:47:33.680645   14120 api_server.go:253] Checking apiserver healthz at https://172.27.199.19:8441/healthz ...
	I0421 18:47:33.708971   14120 api_server.go:279] https://172.27.199.19:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 18:47:33.709413   14120 api_server.go:103] status: https://172.27.199.19:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 18:47:33.754953   14120 api_server.go:253] Checking apiserver healthz at https://172.27.199.19:8441/healthz ...
	I0421 18:47:33.823251   14120 api_server.go:279] https://172.27.199.19:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 18:47:33.823339   14120 api_server.go:103] status: https://172.27.199.19:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 18:47:34.248951   14120 api_server.go:253] Checking apiserver healthz at https://172.27.199.19:8441/healthz ...
	I0421 18:47:34.263700   14120 api_server.go:279] https://172.27.199.19:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 18:47:34.263766   14120 api_server.go:103] status: https://172.27.199.19:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 18:47:34.740615   14120 api_server.go:253] Checking apiserver healthz at https://172.27.199.19:8441/healthz ...
	I0421 18:47:34.750046   14120 api_server.go:279] https://172.27.199.19:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 18:47:34.750118   14120 api_server.go:103] status: https://172.27.199.19:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 18:47:35.248476   14120 api_server.go:253] Checking apiserver healthz at https://172.27.199.19:8441/healthz ...
	I0421 18:47:35.257147   14120 api_server.go:279] https://172.27.199.19:8441/healthz returned 200:
	ok
	I0421 18:47:35.257418   14120 round_trippers.go:463] GET https://172.27.199.19:8441/version
	I0421 18:47:35.257501   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:35.257666   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:35.257733   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:35.274037   14120 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0421 18:47:35.274037   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:35.274037   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:35.274037   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:35.274037   14120 round_trippers.go:580]     Content-Length: 263
	I0421 18:47:35.274037   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:35 GMT
	I0421 18:47:35.274037   14120 round_trippers.go:580]     Audit-Id: 708e863e-21de-4ee4-b838-cf83dd7de7dc
	I0421 18:47:35.274037   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:35.274037   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:35.274037   14120 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0421 18:47:35.275129   14120 api_server.go:141] control plane version: v1.30.0
	I0421 18:47:35.275192   14120 api_server.go:131] duration metric: took 5.0349903s to wait for apiserver health ...
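(Note on the block above: this is minikube's apiserver health wait — /healthz is polled until the anonymous 403 and the 500 responses from still-failing post-start hooks give way to a 200. The Go sketch below illustrates such a poll loop only; it is not minikube's api_server.go. The endpoint address is copied from the log, and skipping TLS verification is an illustrative shortcut, not necessarily how minikube authenticates.)

// healthzpoll.go — illustrative sketch only, not minikube source.
// Polls an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// InsecureSkipVerify keeps the sketch self-contained; a real client would
	// trust the cluster CA and present client credentials instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 (anonymous user) and 500 (post-start hooks still failing) both mean "retry".
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.27.199.19:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}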
	I0421 18:47:35.275192   14120 cni.go:84] Creating CNI manager for ""
	I0421 18:47:35.275192   14120 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 18:47:35.279840   14120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0421 18:47:35.296573   14120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0421 18:47:35.322756   14120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
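(Note: the 496-byte conflist streamed to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The sketch below writes a typical bridge + host-local CNI configuration of the kind the "bridge CNI" option refers to; the exact field values and subnet are assumptions for illustration, not the file minikube actually wrote.)

// writeconflist.go — illustrative sketch; the real conflist is not shown in the log.
package main

import (
	"log"
	"os"
	"path/filepath"
)

// A typical bridge + host-local CNI configuration; values are assumptions.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors the "sudo mkdir -p" step above
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}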
	I0421 18:47:35.366223   14120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:47:35.366510   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods
	I0421 18:47:35.366580   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:35.366580   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:35.366580   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:35.384839   14120 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0421 18:47:35.385295   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:35.385358   14120 round_trippers.go:580]     Audit-Id: 634745d5-bdaa-4f7f-80c2-eecdb93c33d4
	I0421 18:47:35.385358   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:35.385358   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:35.385456   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:35.385456   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:35.385508   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:35 GMT
	I0421 18:47:35.386743   14120 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"510","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52249 chars]
	I0421 18:47:35.392552   14120 system_pods.go:59] 7 kube-system pods found
	I0421 18:47:35.392552   14120 system_pods.go:61] "coredns-7db6d8ff4d-g2fk9" [a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0421 18:47:35.392552   14120 system_pods.go:61] "etcd-functional-808300" [0426dc0d-4f18-437a-a64d-213be30ceae3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0421 18:47:35.392552   14120 system_pods.go:61] "kube-apiserver-functional-808300" [6c8fa5ce-1fde-446e-a0c9-a204acb6dd7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0421 18:47:35.392552   14120 system_pods.go:61] "kube-controller-manager-functional-808300" [f66b9bfd-321d-458e-b897-b4d57a3a419e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0421 18:47:35.393108   14120 system_pods.go:61] "kube-proxy-r68j6" [343cddb9-92cd-4313-a597-cce17924b2d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0421 18:47:35.393172   14120 system_pods.go:61] "kube-scheduler-functional-808300" [7f2bc37e-7207-463e-9444-f360c05fdbbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0421 18:47:35.393172   14120 system_pods.go:61] "storage-provisioner" [24f4cf93-e486-46a0-89a5-a94fe4593b32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0421 18:47:35.393172   14120 system_pods.go:74] duration metric: took 26.9483ms to wait for pod list to return data ...
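(Note: system_pods.go obtains the kube-system pod list above via a raw GET on /api/v1/namespaces/kube-system/pods. A comparable client-go sketch is below; the kubeconfig path is a placeholder, and this is not the code path minikube uses.)

// listpods.go — illustrative client-go sketch, not minikube source.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube builds its client from the profile's files.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}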
	I0421 18:47:35.393224   14120 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:47:35.393357   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes
	I0421 18:47:35.393414   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:35.393414   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:35.393414   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:35.399261   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:35.399261   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:35.399261   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:35.399261   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:35 GMT
	I0421 18:47:35.399261   14120 round_trippers.go:580]     Audit-Id: 395fa9e4-7dcd-4179-abf1-5ed08b75b043
	I0421 18:47:35.399261   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:35.399261   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:35.399261   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:35.399261   14120 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0421 18:47:35.401095   14120 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:47:35.401159   14120 node_conditions.go:123] node cpu capacity is 2
	I0421 18:47:35.401159   14120 node_conditions.go:105] duration metric: took 7.9345ms to run NodePressure ...
	I0421 18:47:35.401218   14120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 18:47:36.343743   14120 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0421 18:47:36.343743   14120 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0421 18:47:36.343819   14120 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 18:47:36.343988   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0421 18:47:36.344090   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.344090   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.344090   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.351286   14120 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 18:47:36.351286   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.351286   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.351286   14120 round_trippers.go:580]     Audit-Id: 6fca603f-1723-4994-b783-a9a9f8b2088d
	I0421 18:47:36.351286   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.351286   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.351286   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.351286   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.352290   14120 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"521"},"items":[{"metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31681 chars]
	I0421 18:47:36.354300   14120 kubeadm.go:733] kubelet initialised
	I0421 18:47:36.354300   14120 kubeadm.go:734] duration metric: took 10.4808ms waiting for restarted kubelet to initialise ...
	I0421 18:47:36.354300   14120 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:47:36.354300   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods
	I0421 18:47:36.354300   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.354300   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.354300   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.372292   14120 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0421 18:47:36.372292   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.372292   14120 round_trippers.go:580]     Audit-Id: b1eae393-6a75-40e6-a255-0daada412ae6
	I0421 18:47:36.372292   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.372292   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.372292   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.372292   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.372292   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.373283   14120 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"521"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"510","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52249 chars]
	I0421 18:47:36.376275   14120 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-g2fk9" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:36.377289   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g2fk9
	I0421 18:47:36.377289   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.377289   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.377289   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.380286   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:36.380379   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.380379   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.380379   14120 round_trippers.go:580]     Audit-Id: 080f21cf-59c0-410f-929b-88a888930c86
	I0421 18:47:36.380379   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.380379   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.380379   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.380379   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.380685   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"510","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0421 18:47:36.381553   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:36.381553   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.381553   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.381618   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.383921   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:36.384952   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.384952   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.384952   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.384952   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.384952   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.384952   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.385018   14120 round_trippers.go:580]     Audit-Id: 1bc7da66-3cd7-458b-a463-46afa1d7fb74
	I0421 18:47:36.385279   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:36.886223   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g2fk9
	I0421 18:47:36.886485   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.886485   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.886485   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.891251   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:36.891251   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.891251   14120 round_trippers.go:580]     Audit-Id: cc789f23-7e53-4b0d-a5ae-f6908fa31b37
	I0421 18:47:36.891251   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.891251   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.891251   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.891251   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.891251   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.891540   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"524","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0421 18:47:36.892416   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:36.892445   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.892445   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.892445   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.899398   14120 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:47:36.899398   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.899398   14120 round_trippers.go:580]     Audit-Id: f551faa9-3334-43f7-923d-1d60ab3b6f64
	I0421 18:47:36.899398   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.899398   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.899398   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.899398   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.899398   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.899398   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:36.900095   14120 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2fk9" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:36.900095   14120 pod_ready.go:81] duration metric: took 522.8016ms for pod "coredns-7db6d8ff4d-g2fk9" in "kube-system" namespace to be "Ready" ...
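(Note: the pod_ready.go lines above poll each pod object and read its Ready condition — the `has status "Ready":"True"` message corresponds to condition Ready=True on the pod. A minimal sketch of that check on a corev1.Pod, for illustration only:)

// podready.go — illustrative sketch of reading a pod's Ready condition.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // true
}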
	I0421 18:47:36.900095   14120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:36.900095   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:36.900095   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.900095   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.900095   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.903239   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:36.903239   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.903239   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.903239   14120 round_trippers.go:580]     Audit-Id: 2ba24163-87ee-4705-a53f-423dcbccdc29
	I0421 18:47:36.903239   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.903239   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.903239   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.903239   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.903819   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:36.904525   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:36.904557   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:36.904557   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:36.904557   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:36.906738   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:36.906738   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:36.906738   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:36.906738   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:36 GMT
	I0421 18:47:36.906738   14120 round_trippers.go:580]     Audit-Id: 4491aef8-190f-4378-8c4e-bc2095440cb6
	I0421 18:47:36.906738   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:36.907158   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:36.907158   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:36.907740   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:37.400980   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:37.400980   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:37.400980   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:37.400980   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:37.405190   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:37.405190   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:37.405190   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:37.405190   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:37.405190   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:37 GMT
	I0421 18:47:37.405190   14120 round_trippers.go:580]     Audit-Id: d08fc576-0eba-4144-945d-6212784511d0
	I0421 18:47:37.405190   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:37.405190   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:37.405190   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:37.406324   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:37.406400   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:37.406400   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:37.406400   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:37.409225   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:37.409225   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:37.409225   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:37.409225   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:37.409225   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:37.409225   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:37 GMT
	I0421 18:47:37.409225   14120 round_trippers.go:580]     Audit-Id: 7ae20482-a541-422f-8c9a-c853cba13c13
	I0421 18:47:37.409225   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:37.410634   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:37.902355   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:37.902355   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:37.902355   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:37.902355   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:37.906920   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:37.907119   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:37.907119   14120 round_trippers.go:580]     Audit-Id: b2a9bb42-0f2f-4029-9b7a-d598a51148ec
	I0421 18:47:37.907119   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:37.907119   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:37.907119   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:37.907119   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:37.907119   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:37 GMT
	I0421 18:47:37.907664   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:37.908395   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:37.908454   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:37.908454   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:37.908454   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:37.911093   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:37.911093   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:37.911093   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:37 GMT
	I0421 18:47:37.911093   14120 round_trippers.go:580]     Audit-Id: 60ac23d7-0b64-46ee-bc6a-b8d3e83376cc
	I0421 18:47:37.911093   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:37.911690   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:37.911690   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:37.911690   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:37.911759   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:38.405210   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:38.405210   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:38.405210   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:38.405210   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:38.409801   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:38.410075   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:38.410075   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:38 GMT
	I0421 18:47:38.410075   14120 round_trippers.go:580]     Audit-Id: c4caad30-7883-4203-ae4e-262c8c8b6fcd
	I0421 18:47:38.410075   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:38.410075   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:38.410182   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:38.410182   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:38.410412   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:38.411043   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:38.411043   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:38.411043   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:38.411043   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:38.413631   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:38.414420   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:38.414420   14120 round_trippers.go:580]     Audit-Id: df7a25a8-a7d1-4542-bdcf-3f8a2f34692f
	I0421 18:47:38.414509   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:38.414509   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:38.414509   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:38.414509   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:38.414509   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:38 GMT
	I0421 18:47:38.414509   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:38.906113   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:38.906346   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:38.906346   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:38.906346   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:38.920166   14120 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 18:47:38.920166   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:38.920621   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:38.920621   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:38.920662   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:38.920662   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:38 GMT
	I0421 18:47:38.920757   14120 round_trippers.go:580]     Audit-Id: 97e8f28a-3e14-4ff4-959f-7ae43fea537e
	I0421 18:47:38.920757   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:38.921010   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:38.921368   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:38.921368   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:38.921368   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:38.921368   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:38.928169   14120 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:47:38.928169   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:38.928169   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:38.928169   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:38.928169   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:38 GMT
	I0421 18:47:38.928169   14120 round_trippers.go:580]     Audit-Id: a97b4b80-fec6-4eb0-8c69-e777a18deed5
	I0421 18:47:38.929021   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:38.929021   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:38.929391   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:38.929505   14120 pod_ready.go:102] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"False"
	I0421 18:47:39.410160   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:39.410394   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:39.410394   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:39.410457   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:39.425056   14120 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0421 18:47:39.425056   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:39.425596   14120 round_trippers.go:580]     Audit-Id: de4db9ba-9d1b-467f-9b8b-dbb4e6241dfb
	I0421 18:47:39.425596   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:39.425596   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:39.425596   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:39.425596   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:39.425596   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:39 GMT
	I0421 18:47:39.425887   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:39.427046   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:39.427133   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:39.427133   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:39.427133   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:39.433516   14120 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:47:39.433516   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:39.433516   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:39 GMT
	I0421 18:47:39.433516   14120 round_trippers.go:580]     Audit-Id: 6142e816-c3b9-418d-9c50-157229ff9f8e
	I0421 18:47:39.433516   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:39.433516   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:39.433516   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:39.433516   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:39.434151   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:39.906488   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:39.906665   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:39.906665   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:39.906665   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:39.909976   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:39.910983   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:39.911062   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:39.911062   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:39.911062   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:39 GMT
	I0421 18:47:39.911062   14120 round_trippers.go:580]     Audit-Id: 7e677616-9649-4405-8142-1168a61bd8f6
	I0421 18:47:39.911062   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:39.911062   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:39.911384   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:39.912480   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:39.912480   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:39.912580   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:39.912580   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:39.915739   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:39.915739   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:39.916229   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:39.916229   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:39.916229   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:39 GMT
	I0421 18:47:39.916229   14120 round_trippers.go:580]     Audit-Id: 4cf475bf-ef0a-41e9-8d33-5e40e5511e7b
	I0421 18:47:39.916229   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:39.916229   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:39.916593   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:40.407403   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:40.407403   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:40.407403   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:40.407403   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:40.410970   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:40.411343   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:40.411343   14120 round_trippers.go:580]     Audit-Id: 026efd62-b987-4c89-8804-f9068dadc4bb
	I0421 18:47:40.411343   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:40.411343   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:40.411343   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:40.411343   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:40.411343   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:40 GMT
	I0421 18:47:40.411653   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:40.412286   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:40.412286   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:40.412286   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:40.412286   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:40.415760   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:40.415760   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:40.415959   14120 round_trippers.go:580]     Audit-Id: 2f7c0e59-db36-4e63-b74f-29274cb91f54
	I0421 18:47:40.415959   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:40.415959   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:40.415959   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:40.415959   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:40.416067   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:40 GMT
	I0421 18:47:40.416151   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:40.907744   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:40.907744   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:40.907744   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:40.907744   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:40.912383   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:40.913215   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:40.913215   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:40.913215   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:40.913215   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:40.913215   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:40.913403   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:40 GMT
	I0421 18:47:40.913403   14120 round_trippers.go:580]     Audit-Id: 48526d41-5260-440c-9f48-8b795c4cfe8d
	I0421 18:47:40.914176   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:40.914763   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:40.914763   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:40.914956   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:40.914956   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:40.917168   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:40.917168   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:40.917168   14120 round_trippers.go:580]     Audit-Id: 7aa81fe0-17e6-47b4-a32b-8384d25c2df4
	I0421 18:47:40.917168   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:40.917168   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:40.917168   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:40.917168   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:40.917168   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:40 GMT
	I0421 18:47:40.918484   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:41.407920   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:41.407992   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:41.407992   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:41.407992   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:41.411301   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:41.411301   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:41.411301   14120 round_trippers.go:580]     Audit-Id: 3e33718b-1212-49f7-ac6a-9f39afb97016
	I0421 18:47:41.411301   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:41.411301   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:41.412051   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:41.412051   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:41.412051   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:41 GMT
	I0421 18:47:41.413368   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:41.414853   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:41.414930   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:41.414930   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:41.414930   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:41.419541   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:41.419573   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:41.419573   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:41.419657   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:41.419657   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:41.419657   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:41 GMT
	I0421 18:47:41.419657   14120 round_trippers.go:580]     Audit-Id: face4a5d-4f8f-4d15-86a9-c1f6174c6572
	I0421 18:47:41.419657   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:41.420804   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:41.420888   14120 pod_ready.go:102] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"False"
	I0421 18:47:41.908869   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:41.908943   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:41.909005   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:41.909005   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:41.914021   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:41.914021   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:41.914021   14120 round_trippers.go:580]     Audit-Id: 15e4c867-4efb-4b18-ad4c-248eafa0d160
	I0421 18:47:41.914021   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:41.914021   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:41.914021   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:41.914021   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:41.914021   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:41 GMT
	I0421 18:47:41.914021   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:41.914751   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:41.914751   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:41.914751   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:41.914751   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:41.917333   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:41.917961   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:41.917961   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:41.917961   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:41.917961   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:41.917961   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:41 GMT
	I0421 18:47:41.917961   14120 round_trippers.go:580]     Audit-Id: dd9e6c11-a5a8-4495-902b-91793a566612
	I0421 18:47:41.917961   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:41.917961   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:42.405621   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:42.405741   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:42.405741   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:42.405741   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:42.408995   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:42.408995   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:42.408995   14120 round_trippers.go:580]     Audit-Id: 3ff61e27-34ff-45f8-aad5-4be1cdfea303
	I0421 18:47:42.408995   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:42.408995   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:42.408995   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:42.409703   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:42.409703   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:42 GMT
	I0421 18:47:42.409758   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:42.410690   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:42.410690   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:42.410862   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:42.410862   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:42.413348   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:42.413348   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:42.413348   14120 round_trippers.go:580]     Audit-Id: eb63c280-e814-4ae7-bfe6-f87076588435
	I0421 18:47:42.413348   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:42.413348   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:42.413348   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:42.413348   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:42.413348   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:42 GMT
	I0421 18:47:42.413348   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:42.905510   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:42.905510   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:42.905510   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:42.905510   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:42.909346   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:42.909925   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:42.909925   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:42.909925   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:42 GMT
	I0421 18:47:42.909925   14120 round_trippers.go:580]     Audit-Id: 4bcf0c1b-5800-4ef8-aa65-a269be45977c
	I0421 18:47:42.909925   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:42.909925   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:42.909925   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:42.910223   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:42.910963   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:42.910963   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:42.911026   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:42.911026   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:42.913988   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:42.914132   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:42.914132   14120 round_trippers.go:580]     Audit-Id: 7553380c-90f1-4c57-82e8-495cef671520
	I0421 18:47:42.914132   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:42.914132   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:42.914132   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:42.914132   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:42.914132   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:42 GMT
	I0421 18:47:42.914569   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:43.406621   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:43.406716   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:43.406716   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:43.406716   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:43.410929   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:43.411016   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:43.411016   14120 round_trippers.go:580]     Audit-Id: d97af1b7-bf4f-4f80-b47c-8afea55413df
	I0421 18:47:43.411016   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:43.411016   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:43.411082   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:43.411082   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:43.411082   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:43 GMT
	I0421 18:47:43.411260   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:43.411820   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:43.411968   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:43.411968   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:43.411968   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:43.415258   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:43.415258   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:43.415732   14120 round_trippers.go:580]     Audit-Id: a63fb060-cfb2-4118-909c-3db5e5c3569c
	I0421 18:47:43.415732   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:43.415732   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:43.415732   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:43.415732   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:43.415732   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:43 GMT
	I0421 18:47:43.415946   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:43.907477   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:43.907477   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:43.907567   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:43.907567   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:43.911891   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:43.911891   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:43.912520   14120 round_trippers.go:580]     Audit-Id: 50fcf799-0153-4bda-b9de-71f5aa4a5c81
	I0421 18:47:43.912520   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:43.912520   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:43.912520   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:43.912520   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:43.912520   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:43 GMT
	I0421 18:47:43.912769   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:43.913502   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:43.913558   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:43.913558   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:43.913558   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:43.916906   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:43.916906   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:43.916906   14120 round_trippers.go:580]     Audit-Id: 989f82ce-c40d-4bcb-a549-f0986b7b9140
	I0421 18:47:43.916906   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:43.916906   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:43.916906   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:43.916906   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:43.916906   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:43 GMT
	I0421 18:47:43.917820   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:43.918333   14120 pod_ready.go:102] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"False"
	I0421 18:47:44.407796   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:44.407796   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:44.407796   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:44.407796   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:44.412733   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:44.412733   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:44.412733   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:44.412733   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:44.412733   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:44.412822   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:44 GMT
	I0421 18:47:44.412822   14120 round_trippers.go:580]     Audit-Id: 97d65cdb-7800-4c42-9e17-4c0610a67763
	I0421 18:47:44.412822   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:44.413054   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:44.413911   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:44.414045   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:44.414106   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:44.414106   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:44.417870   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:44.417937   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:44.417937   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:44.417937   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:44.417937   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:44.417937   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:44 GMT
	I0421 18:47:44.417937   14120 round_trippers.go:580]     Audit-Id: 90a002f7-956f-48a9-8243-ea32d9b24a3e
	I0421 18:47:44.417937   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:44.418231   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:44.907099   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:44.907099   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:44.907099   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:44.907099   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:44.913708   14120 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:47:44.913708   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:44.913708   14120 round_trippers.go:580]     Audit-Id: 2413d249-b3a9-4453-98df-454d23bf7d0b
	I0421 18:47:44.913819   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:44.913819   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:44.913819   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:44.913819   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:44.913819   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:44 GMT
	I0421 18:47:44.914148   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"514","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6593 chars]
	I0421 18:47:44.914891   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:44.914891   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:44.914891   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:44.914891   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:44.918467   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:44.918467   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:44.918467   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:44 GMT
	I0421 18:47:44.918467   14120 round_trippers.go:580]     Audit-Id: 1a52506e-ca04-4358-a63d-6f279d209b8b
	I0421 18:47:44.918734   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:44.918734   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:44.918734   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:44.918734   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:44.919012   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:45.409146   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:45.409146   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.409146   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.409146   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.414461   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:45.414461   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.415474   14120 round_trippers.go:580]     Audit-Id: 04fbb157-edd1-4074-8897-1fd8ae06f804
	I0421 18:47:45.415507   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.415549   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.415601   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.415601   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.415601   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.415784   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"581","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6369 chars]
	I0421 18:47:45.416553   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:45.416553   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.417348   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.417380   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.423078   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:45.423078   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.423078   14120 round_trippers.go:580]     Audit-Id: c5f3480c-92a4-4afe-8166-0ac87936abb1
	I0421 18:47:45.423078   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.423078   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.423078   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.423078   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.423078   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.423837   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:45.423837   14120 pod_ready.go:92] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:45.423837   14120 pod_ready.go:81] duration metric: took 8.523682s for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:45.423837   14120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:45.424494   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300
	I0421 18:47:45.424586   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.424586   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.424627   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.427325   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:45.427325   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.427325   14120 round_trippers.go:580]     Audit-Id: 63f65906-3024-48cf-b62f-0a7a02ac993f
	I0421 18:47:45.427325   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.427325   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.427325   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.427325   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.427325   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.427325   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-808300","namespace":"kube-system","uid":"6c8fa5ce-1fde-446e-a0c9-a204acb6dd7f","resourceVersion":"578","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.199.19:8441","kubernetes.io/config.hash":"0b7bdac9b6749d02966446979275dc66","kubernetes.io/config.mirror":"0b7bdac9b6749d02966446979275dc66","kubernetes.io/config.seen":"2024-04-21T18:44:48.155612590Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8154 chars]
	I0421 18:47:45.428717   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:45.428799   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.428799   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.428799   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.431356   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:45.431975   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.431975   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.431975   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.431975   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.431975   14120 round_trippers.go:580]     Audit-Id: 23690871-1e2d-4718-9624-c9b8983e1aaa
	I0421 18:47:45.431975   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.431975   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.432233   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:45.432684   14120 pod_ready.go:92] pod "kube-apiserver-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:45.432684   14120 pod_ready.go:81] duration metric: took 8.8465ms for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:45.432684   14120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:45.433198   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0421 18:47:45.433243   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.433243   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.433243   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.435498   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:45.435498   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.435498   14120 round_trippers.go:580]     Audit-Id: 72796e21-50a2-4eb8-8c60-b17d6d76db20
	I0421 18:47:45.435498   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.435498   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.435498   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.435498   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.435498   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.435498   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-808300","namespace":"kube-system","uid":"f66b9bfd-321d-458e-b897-b4d57a3a419e","resourceVersion":"513","creationTimestamp":"2024-04-21T18:44:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.mirror":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.seen":"2024-04-21T18:44:39.372917737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7960 chars]
	I0421 18:47:45.436552   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:45.436552   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.436552   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.436552   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.439208   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:45.439208   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.439208   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.439208   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.439208   14120 round_trippers.go:580]     Audit-Id: 5b542333-8eaf-42a1-b69b-3cb93b40195f
	I0421 18:47:45.439208   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.439208   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.439208   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.440055   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:45.937531   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0421 18:47:45.937531   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.937531   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.937531   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.943119   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:45.943119   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.943342   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.943342   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.943342   14120 round_trippers.go:580]     Audit-Id: 401ef5c7-4d20-49e9-9d57-a1584c697400
	I0421 18:47:45.943342   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.943342   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.943342   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.943498   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-808300","namespace":"kube-system","uid":"f66b9bfd-321d-458e-b897-b4d57a3a419e","resourceVersion":"513","creationTimestamp":"2024-04-21T18:44:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.mirror":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.seen":"2024-04-21T18:44:39.372917737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7960 chars]
	I0421 18:47:45.944349   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:45.944411   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:45.944411   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:45.944411   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:45.947128   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:45.947128   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:45.947128   14120 round_trippers.go:580]     Audit-Id: ef77d519-8a92-4f11-a555-272d4a19e9ec
	I0421 18:47:45.947128   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:45.947128   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:45.947128   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:45.947128   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:45.947128   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:45 GMT
	I0421 18:47:45.948082   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:46.440074   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0421 18:47:46.440355   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.440355   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.440473   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.443829   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:46.443829   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.444074   14120 round_trippers.go:580]     Audit-Id: ba1dbc08-a37f-4740-af7f-8d183cc2ec00
	I0421 18:47:46.444074   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.444074   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.444074   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.444074   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.444074   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.444399   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-808300","namespace":"kube-system","uid":"f66b9bfd-321d-458e-b897-b4d57a3a419e","resourceVersion":"583","creationTimestamp":"2024-04-21T18:44:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.mirror":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.seen":"2024-04-21T18:44:39.372917737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7698 chars]
	I0421 18:47:46.445090   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:46.445090   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.445090   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.445090   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.446712   14120 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 18:47:46.447724   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.447748   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.447748   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.447748   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.447748   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.447748   14120 round_trippers.go:580]     Audit-Id: ca9c2050-fdf2-4ba9-91a8-4fb5d28654df
	I0421 18:47:46.447748   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.447915   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:46.448408   14120 pod_ready.go:92] pod "kube-controller-manager-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:46.448408   14120 pod_ready.go:81] duration metric: took 1.0157171s for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:46.448408   14120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r68j6" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:46.448408   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-proxy-r68j6
	I0421 18:47:46.448408   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.448408   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.448408   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.453711   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:46.453890   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.453890   14120 round_trippers.go:580]     Audit-Id: 0f8034c9-f12a-4c6f-a71d-ac82df4deb90
	I0421 18:47:46.453890   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.453890   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.453890   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.453890   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.453890   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.453890   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r68j6","generateName":"kube-proxy-","namespace":"kube-system","uid":"343cddb9-92cd-4313-a597-cce17924b2d7","resourceVersion":"525","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c987bea6-3de3-42e8-bd1c-08710108f0e3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c987bea6-3de3-42e8-bd1c-08710108f0e3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0421 18:47:46.454830   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:46.454893   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.454893   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.454959   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.457472   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:46.458475   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.458500   14120 round_trippers.go:580]     Audit-Id: cfb9421f-2f0f-4cf7-b741-abdf0485a2f6
	I0421 18:47:46.458500   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.458500   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.458500   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.458500   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.458500   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.458891   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:46.459458   14120 pod_ready.go:92] pod "kube-proxy-r68j6" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:46.459458   14120 pod_ready.go:81] duration metric: took 11.0495ms for pod "kube-proxy-r68j6" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:46.459458   14120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:46.459458   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0421 18:47:46.459658   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.459658   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.459658   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.462491   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:46.462563   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.462563   14120 round_trippers.go:580]     Audit-Id: 5daab7be-db7e-49c7-98fe-5a5202209127
	I0421 18:47:46.462563   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.462563   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.462644   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.462644   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.462644   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.462959   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-808300","namespace":"kube-system","uid":"7f2bc37e-7207-463e-9444-f360c05fdbbc","resourceVersion":"511","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.mirror":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.seen":"2024-04-21T18:44:48.155614890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5698 chars]
	I0421 18:47:46.463700   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:46.463700   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.463759   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.463759   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.467976   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:46.467976   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.468053   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.468053   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.468053   14120 round_trippers.go:580]     Audit-Id: 62ff7411-5948-4148-a3bf-1c1a314caf83
	I0421 18:47:46.468053   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.468053   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.468053   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.468252   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:46.974424   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0421 18:47:46.974424   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.974424   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.974424   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.978167   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:46.979189   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.979250   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.979250   14120 round_trippers.go:580]     Audit-Id: bf6a7223-5bb5-4184-8220-aa34f64ab428
	I0421 18:47:46.979250   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.979250   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.979250   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.979250   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.979685   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-808300","namespace":"kube-system","uid":"7f2bc37e-7207-463e-9444-f360c05fdbbc","resourceVersion":"511","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.mirror":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.seen":"2024-04-21T18:44:48.155614890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5698 chars]
	I0421 18:47:46.980332   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:46.980332   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:46.980332   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:46.980332   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:46.983460   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:46.983757   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:46.983757   14120 round_trippers.go:580]     Audit-Id: d6f789ff-5718-4ada-a144-573b64d8fe8b
	I0421 18:47:46.983795   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:46.983795   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:46.983795   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:46.983795   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:46.983795   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:46 GMT
	I0421 18:47:46.984178   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:47.471985   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0421 18:47:47.471985   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:47.471985   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:47.471985   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:47.475565   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:47.475565   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:47.475565   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:47.475565   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:47 GMT
	I0421 18:47:47.476511   14120 round_trippers.go:580]     Audit-Id: 2eebf836-0c4e-40a3-990f-bd3629031439
	I0421 18:47:47.476511   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:47.476511   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:47.476511   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:47.476786   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-808300","namespace":"kube-system","uid":"7f2bc37e-7207-463e-9444-f360c05fdbbc","resourceVersion":"511","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.mirror":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.seen":"2024-04-21T18:44:48.155614890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5698 chars]
	I0421 18:47:47.477011   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:47.477011   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:47.477011   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:47.477011   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:47.480615   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:47.480615   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:47.480920   14120 round_trippers.go:580]     Audit-Id: a00db50a-fc0e-4b34-bf0d-c432089c994c
	I0421 18:47:47.480920   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:47.480920   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:47.481012   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:47.481012   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:47.481012   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:47 GMT
	I0421 18:47:47.481319   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:47.974696   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0421 18:47:47.974811   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:47.974811   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:47.974811   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:47.978077   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:47.978980   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:47.978980   14120 round_trippers.go:580]     Audit-Id: 58f14137-d14c-4533-8267-17ddca84d1c2
	I0421 18:47:47.978980   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:47.978980   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:47.978980   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:47.979072   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:47.979072   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:47 GMT
	I0421 18:47:47.979325   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-808300","namespace":"kube-system","uid":"7f2bc37e-7207-463e-9444-f360c05fdbbc","resourceVersion":"590","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.mirror":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.seen":"2024-04-21T18:44:48.155614890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0421 18:47:47.979996   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:47.980136   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:47.980136   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:47.980136   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:47.984066   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:47.984471   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:47.984521   14120 round_trippers.go:580]     Audit-Id: fa180c26-fd0d-4f76-82c7-eb325e3afe59
	I0421 18:47:47.984521   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:47.984521   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:47.984565   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:47.984565   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:47.984565   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:47 GMT
	I0421 18:47:47.984703   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:47.985353   14120 pod_ready.go:92] pod "kube-scheduler-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:47.985353   14120 pod_ready.go:81] duration metric: took 1.525885s for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:47.985439   14120 pod_ready.go:38] duration metric: took 11.6309713s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
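[editor's note] The pod_ready.go entries above record minikube polling the API server until each system-critical pod reports the PodReady condition as "True". For readers unfamiliar with that pattern, here is a minimal, illustrative client-go sketch of such a readiness poll. It is not minikube's actual pod_ready.go code; the kubeconfig path, poll interval, and helper names are assumptions made for the example.

// readiness_sketch.go: illustrative only, not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is "True".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod until it is Ready or the timeout expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
}

func main() {
	// The kubeconfig path below is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "C:\\path\\to\\kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-functional-808300", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}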
	I0421 18:47:47.985439   14120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 18:47:48.010646   14120 command_runner.go:130] > -16
	I0421 18:47:48.010725   14120 ops.go:34] apiserver oom_adj: -16
	I0421 18:47:48.010810   14120 kubeadm.go:591] duration metric: took 24.2879783s to restartPrimaryControlPlane
	I0421 18:47:48.010810   14120 kubeadm.go:393] duration metric: took 24.3981393s to StartCluster
	I0421 18:47:48.010872   14120 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:47:48.010985   14120 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:47:48.012288   14120 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:47:48.013564   14120 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.199.19 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 18:47:48.013564   14120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 18:47:48.017603   14120 out.go:177] * Verifying Kubernetes components...
	I0421 18:47:48.013564   14120 addons.go:69] Setting storage-provisioner=true in profile "functional-808300"
	I0421 18:47:48.013564   14120 addons.go:69] Setting default-storageclass=true in profile "functional-808300"
	I0421 18:47:48.014181   14120 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:47:48.020313   14120 addons.go:234] Setting addon storage-provisioner=true in "functional-808300"
	W0421 18:47:48.020313   14120 addons.go:243] addon storage-provisioner should already be in state true
	I0421 18:47:48.020313   14120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-808300"
	I0421 18:47:48.020313   14120 host.go:66] Checking if "functional-808300" exists ...
	I0421 18:47:48.021866   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:47:48.022691   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
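[editor's note] The two libmachine lines above show minikube shelling out to PowerShell to read the Hyper-V VM's state before touching the host. The sketch below illustrates that pattern in Go (invoking Hyper-V\Get-VM through powershell.exe and trimming the output); the function name and error handling are assumptions for the example, not the hyperv driver's actual code.

// hyperv_state_sketch.go: illustrative only, not the minikube hyperv driver.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState mirrors the logged invocation:
//   powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM <name> ).state
func vmState(name string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
	out, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("querying Hyper-V VM %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := vmState("functional-808300")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("VM state:", state) // e.g. "Running"
}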
	I0421 18:47:48.036661   14120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 18:47:48.361543   14120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 18:47:48.391339   14120 node_ready.go:35] waiting up to 6m0s for node "functional-808300" to be "Ready" ...
	I0421 18:47:48.391608   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:48.391662   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:48.391662   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:48.391662   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:48.394006   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:48.394006   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:48.394006   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:48.394006   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:48 GMT
	I0421 18:47:48.394006   14120 round_trippers.go:580]     Audit-Id: f09769ec-fb4d-4e70-a0a6-3ff5c6757fd3
	I0421 18:47:48.394006   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:48.394006   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:48.394006   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:48.395027   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:48.395523   14120 node_ready.go:49] node "functional-808300" has status "Ready":"True"
	I0421 18:47:48.395523   14120 node_ready.go:38] duration metric: took 4.1079ms for node "functional-808300" to be "Ready" ...
	I0421 18:47:48.395523   14120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:47:48.395523   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods
	I0421 18:47:48.395694   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:48.395694   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:48.395751   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:48.400252   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:48.400252   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:48.400252   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:48.400252   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:48.400252   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:48 GMT
	I0421 18:47:48.400252   14120 round_trippers.go:580]     Audit-Id: a72d12b9-b7f1-4151-94ff-615a144d2148
	I0421 18:47:48.400252   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:48.400252   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:48.401530   14120 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"590"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"524","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50822 chars]
	I0421 18:47:48.404347   14120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g2fk9" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:48.404542   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g2fk9
	I0421 18:47:48.404542   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:48.404542   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:48.404542   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:48.411249   14120 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 18:47:48.411249   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:48.411249   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:48 GMT
	I0421 18:47:48.411249   14120 round_trippers.go:580]     Audit-Id: b0b499c1-3d6f-40dd-92e8-a36bbbf99e77
	I0421 18:47:48.411249   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:48.411797   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:48.411797   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:48.411797   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:48.417985   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"524","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0421 18:47:48.418801   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:48.418801   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:48.419160   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:48.419160   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:48.423159   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:48.423277   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:48.423277   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:48 GMT
	I0421 18:47:48.423277   14120 round_trippers.go:580]     Audit-Id: a40d7ac7-b1f9-4a88-8214-311d80a49ae0
	I0421 18:47:48.423393   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:48.423393   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:48.423393   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:48.423393   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:48.423932   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:48.424599   14120 pod_ready.go:92] pod "coredns-7db6d8ff4d-g2fk9" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:48.424599   14120 pod_ready.go:81] duration metric: took 20.141ms for pod "coredns-7db6d8ff4d-g2fk9" in "kube-system" namespace to be "Ready" ...
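Each per-pod wait that follows (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) repeats the same shape: fetch the Pod, test its Ready condition, and re-fetch the Node between attempts. A hedged sketch of just the condition test, assuming a clientset built as in the earlier snippet:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the named pod's Ready condition is True.
// Sketch only; minikube's pod_ready.go adds the timeout, node re-checks and logging.
func podIsReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}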
	I0421 18:47:48.424701   14120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:48.424876   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/etcd-functional-808300
	I0421 18:47:48.424876   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:48.424876   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:48.424876   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:48.429937   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:48.429937   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:48.429937   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:48.429937   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:48.429937   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:48.429937   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:48.429937   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:48 GMT
	I0421 18:47:48.429937   14120 round_trippers.go:580]     Audit-Id: 6a4d229c-b934-474e-bf1e-2e241119168a
	I0421 18:47:48.429937   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-808300","namespace":"kube-system","uid":"0426dc0d-4f18-437a-a64d-213be30ceae3","resourceVersion":"581","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.199.19:2379","kubernetes.io/config.hash":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.mirror":"82a913c814132a6f4702256b56ae1504","kubernetes.io/config.seen":"2024-04-21T18:44:48.155606790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6369 chars]
	I0421 18:47:48.617254   14120 request.go:629] Waited for 185.7979ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:48.617554   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:48.617707   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:48.617707   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:48.617707   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:48.624977   14120 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 18:47:48.624977   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:48.625520   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:48.625520   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:48.625520   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:48.625520   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:48.625520   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:48 GMT
	I0421 18:47:48.625520   14120 round_trippers.go:580]     Audit-Id: 139c230a-adeb-4361-ae4a-b14eb36b85c4
	I0421 18:47:48.625831   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:48.626972   14120 pod_ready.go:92] pod "etcd-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:48.627093   14120 pod_ready.go:81] duration metric: took 202.3394ms for pod "etcd-functional-808300" in "kube-system" namespace to be "Ready" ...
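The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own request rate limiter, not from the API server. The client config logged later in this section shows QPS:0 and Burst:0, so client-go falls back to its conservative defaults (5 requests per second with a burst of 10), which is consistent with the roughly 200ms gaps above. A sketch of where those knobs live; the values below are illustrative only:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Kubeconfig path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Option 1: raise the simple QPS/Burst knobs on the rest.Config.
	cfg.QPS = 50
	cfg.Burst = 100

	// Option 2: install an explicit token-bucket limiter instead.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client constructed: %T\n", client)
}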
	I0421 18:47:48.627093   14120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:48.823850   14120 request.go:629] Waited for 196.3369ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300
	I0421 18:47:48.824043   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-808300
	I0421 18:47:48.824133   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:48.824133   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:48.824260   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:48.828560   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:48.828560   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:48.828560   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:48.828560   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:48.828560   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:48.828560   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:48 GMT
	I0421 18:47:48.828560   14120 round_trippers.go:580]     Audit-Id: 1e750f6e-20fe-4b6e-bda3-b673f9d3d6d4
	I0421 18:47:48.829419   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:48.829798   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-808300","namespace":"kube-system","uid":"6c8fa5ce-1fde-446e-a0c9-a204acb6dd7f","resourceVersion":"578","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.199.19:8441","kubernetes.io/config.hash":"0b7bdac9b6749d02966446979275dc66","kubernetes.io/config.mirror":"0b7bdac9b6749d02966446979275dc66","kubernetes.io/config.seen":"2024-04-21T18:44:48.155612590Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8154 chars]
	I0421 18:47:49.015338   14120 request.go:629] Waited for 184.6024ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:49.015772   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:49.015772   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:49.015772   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:49.015772   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:49.021001   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:49.021001   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:49.021001   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:49 GMT
	I0421 18:47:49.021001   14120 round_trippers.go:580]     Audit-Id: 79030ee7-5295-4dba-94af-94790d248fc3
	I0421 18:47:49.021001   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:49.021001   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:49.021219   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:49.021219   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:49.021856   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:49.022175   14120 pod_ready.go:92] pod "kube-apiserver-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:49.022175   14120 pod_ready.go:81] duration metric: took 395.0795ms for pod "kube-apiserver-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:49.022175   14120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:49.217416   14120 request.go:629] Waited for 195.1106ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0421 18:47:49.217662   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-808300
	I0421 18:47:49.217662   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:49.217662   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:49.217662   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:49.221585   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:49.221585   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:49.221585   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:49.222090   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:49 GMT
	I0421 18:47:49.222090   14120 round_trippers.go:580]     Audit-Id: f6fea6ef-2dc9-413c-ab92-9c0012a1c0b4
	I0421 18:47:49.222090   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:49.222090   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:49.222090   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:49.222153   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-808300","namespace":"kube-system","uid":"f66b9bfd-321d-458e-b897-b4d57a3a419e","resourceVersion":"583","creationTimestamp":"2024-04-21T18:44:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.mirror":"34663a7eebab7481865a01efa53e584b","kubernetes.io/config.seen":"2024-04-21T18:44:39.372917737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7698 chars]
	I0421 18:47:49.424809   14120 request.go:629] Waited for 200.7978ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:49.424869   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:49.424869   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:49.424869   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:49.424869   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:49.428458   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:49.428975   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:49.428975   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:49.428975   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:49.428975   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:49 GMT
	I0421 18:47:49.428975   14120 round_trippers.go:580]     Audit-Id: 0ebb7819-c1a4-4609-9370-cc474448ac49
	I0421 18:47:49.428975   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:49.428975   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:49.430012   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:49.430714   14120 pod_ready.go:92] pod "kube-controller-manager-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:49.430714   14120 pod_ready.go:81] duration metric: took 408.5364ms for pod "kube-controller-manager-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:49.430714   14120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r68j6" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:49.615485   14120 request.go:629] Waited for 184.6191ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-proxy-r68j6
	I0421 18:47:49.615654   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-proxy-r68j6
	I0421 18:47:49.615654   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:49.615654   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:49.615654   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:49.619241   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:49.619241   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:49.619241   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:49.619241   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:49 GMT
	I0421 18:47:49.619241   14120 round_trippers.go:580]     Audit-Id: 85d3d011-7b7b-402c-914e-a7b58abaea58
	I0421 18:47:49.619241   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:49.619677   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:49.619677   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:49.619867   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r68j6","generateName":"kube-proxy-","namespace":"kube-system","uid":"343cddb9-92cd-4313-a597-cce17924b2d7","resourceVersion":"525","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c987bea6-3de3-42e8-bd1c-08710108f0e3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c987bea6-3de3-42e8-bd1c-08710108f0e3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0421 18:47:49.820495   14120 request.go:629] Waited for 199.9637ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:49.820495   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:49.820495   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:49.820495   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:49.820495   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:49.824075   14120 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 18:47:49.824075   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:49.824075   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:49.824075   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:49.824075   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:49 GMT
	I0421 18:47:49.825012   14120 round_trippers.go:580]     Audit-Id: bc807d8f-f51b-4631-8ec8-b9e391df8479
	I0421 18:47:49.825012   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:49.825012   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:49.825510   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:49.826257   14120 pod_ready.go:92] pod "kube-proxy-r68j6" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:49.826257   14120 pod_ready.go:81] duration metric: took 395.5401ms for pod "kube-proxy-r68j6" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:49.826257   14120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:50.011863   14120 request.go:629] Waited for 185.6049ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0421 18:47:50.011863   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-808300
	I0421 18:47:50.011863   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:50.011863   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:50.011863   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:50.016445   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:50.016445   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:50.016445   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:50.016445   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:50.016445   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:50.016445   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:50 GMT
	I0421 18:47:50.016445   14120 round_trippers.go:580]     Audit-Id: 85180f0b-488f-4997-9f1b-c0ffaafbe009
	I0421 18:47:50.016445   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:50.016445   14120 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-808300","namespace":"kube-system","uid":"7f2bc37e-7207-463e-9444-f360c05fdbbc","resourceVersion":"590","creationTimestamp":"2024-04-21T18:44:48Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.mirror":"e9536eb799ba9d9fd3958f4f20ee4ab3","kubernetes.io/config.seen":"2024-04-21T18:44:48.155614890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:44:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0421 18:47:50.216447   14120 request.go:629] Waited for 198.9253ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:50.216523   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes/functional-808300
	I0421 18:47:50.216615   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:50.216615   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:50.216615   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:50.221209   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:50.221209   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:50.222036   14120 round_trippers.go:580]     Audit-Id: dfa63b82-f4c1-4e67-8b5c-2417b0813f5d
	I0421 18:47:50.222036   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:50.222036   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:50.222036   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:50.222036   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:50.222102   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:50 GMT
	I0421 18:47:50.222444   14120 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-04-21T18:44:44Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0421 18:47:50.223025   14120 pod_ready.go:92] pod "kube-scheduler-functional-808300" in "kube-system" namespace has status "Ready":"True"
	I0421 18:47:50.223025   14120 pod_ready.go:81] duration metric: took 396.7653ms for pod "kube-scheduler-functional-808300" in "kube-system" namespace to be "Ready" ...
	I0421 18:47:50.223025   14120 pod_ready.go:38] duration metric: took 1.8274892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 18:47:50.223025   14120 api_server.go:52] waiting for apiserver process to appear ...
	I0421 18:47:50.238138   14120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 18:47:50.268923   14120 command_runner.go:130] > 6049
	I0421 18:47:50.268923   14120 api_server.go:72] duration metric: took 2.2553427s to wait for apiserver process to appear ...
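The api_server.go:52 wait shells out (through minikube's SSH runner) to pgrep inside the guest and treats a returned PID as success; the "> 6049" line above is that PID. A local approximation of the same check with os/exec, assuming sudo and pgrep are available where it runs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -f matches against the full command line, -x requires the whole line to match,
	// -n picks the newest matching process.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found yet:", err)
		return
	}
	fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out))) // 6049 in the run above
}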
	I0421 18:47:50.268923   14120 api_server.go:88] waiting for apiserver healthz status ...
	I0421 18:47:50.268923   14120 api_server.go:253] Checking apiserver healthz at https://172.27.199.19:8441/healthz ...
	I0421 18:47:50.277197   14120 api_server.go:279] https://172.27.199.19:8441/healthz returned 200:
	ok
	I0421 18:47:50.277197   14120 round_trippers.go:463] GET https://172.27.199.19:8441/version
	I0421 18:47:50.277197   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:50.277197   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:50.277197   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:50.279509   14120 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 18:47:50.279509   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:50.279509   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:50.279509   14120 round_trippers.go:580]     Content-Length: 263
	I0421 18:47:50.279579   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:50 GMT
	I0421 18:47:50.279579   14120 round_trippers.go:580]     Audit-Id: e8deceaf-8a12-42ee-a3d4-a8ece29fa77c
	I0421 18:47:50.279579   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:50.279579   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:50.279579   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:50.279579   14120 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0421 18:47:50.279579   14120 api_server.go:141] control plane version: v1.30.0
	I0421 18:47:50.279579   14120 api_server.go:131] duration metric: took 10.6567ms to wait for apiserver health ...
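The healthz probe and the /version request above can both be expressed through the clientset's discovery client. A minimal sketch, assuming a clientset built as in the node-readiness snippet earlier:

package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer mirrors the two probes logged above: GET /healthz, then read
// the server version. Error handling is deliberately minimal.
func checkAPIServer(ctx context.Context, client kubernetes.Interface) error {
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	fmt.Printf("healthz returned %q\n", string(body)) // "ok" in the run above

	info, err := client.Discovery().ServerVersion()
	if err != nil {
		return fmt.Errorf("version: %w", err)
	}
	fmt.Println("control plane version:", info.GitVersion) // v1.30.0 in the run above
	return nil
}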
	I0421 18:47:50.279579   14120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 18:47:50.311153   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:47:50.311153   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:50.311743   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:47:50.312050   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:50.320765   14120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 18:47:50.312783   14120 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:47:50.321378   14120 kapi.go:59] client config for functional-808300: &rest.Config{Host:"https://172.27.199.19:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-808300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 18:47:50.325916   14120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:47:50.325916   14120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 18:47:50.325916   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:47:50.326177   14120 addons.go:234] Setting addon default-storageclass=true in "functional-808300"
	W0421 18:47:50.326177   14120 addons.go:243] addon default-storageclass should already be in state true
	I0421 18:47:50.326177   14120 host.go:66] Checking if "functional-808300" exists ...
	I0421 18:47:50.327180   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
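The libmachine [executing ==>] lines here and further down are the Hyper-V driver shelling out to PowerShell to read the VM state and, later, the guest's IP address. A hedged os/exec sketch of those two queries, with the VM name taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runPS evaluates one PowerShell expression the way the [executing ==>] lines do.
func runPS(expr string) (string, error) {
	out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := runPS(`( Hyper-V\Get-VM functional-808300 ).state`)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM state:", state) // "Running" in the run above

	ip, err := runPS(`(( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]`)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM IP:", ip) // 172.27.199.19 in the run above
}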
	I0421 18:47:50.419033   14120 request.go:629] Waited for 139.1089ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods
	I0421 18:47:50.419156   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods
	I0421 18:47:50.419246   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:50.419303   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:50.419349   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:50.440537   14120 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0421 18:47:50.440537   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:50.440537   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:50.440537   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:50.440537   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:50 GMT
	I0421 18:47:50.440537   14120 round_trippers.go:580]     Audit-Id: 97ee589a-b4b8-4c10-ac79-f632862dc3ac
	I0421 18:47:50.440537   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:50.440537   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:50.442239   14120 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"590"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"524","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50822 chars]
	I0421 18:47:50.444559   14120 system_pods.go:59] 7 kube-system pods found
	I0421 18:47:50.444559   14120 system_pods.go:61] "coredns-7db6d8ff4d-g2fk9" [a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75] Running
	I0421 18:47:50.444559   14120 system_pods.go:61] "etcd-functional-808300" [0426dc0d-4f18-437a-a64d-213be30ceae3] Running
	I0421 18:47:50.444559   14120 system_pods.go:61] "kube-apiserver-functional-808300" [6c8fa5ce-1fde-446e-a0c9-a204acb6dd7f] Running
	I0421 18:47:50.444559   14120 system_pods.go:61] "kube-controller-manager-functional-808300" [f66b9bfd-321d-458e-b897-b4d57a3a419e] Running
	I0421 18:47:50.444559   14120 system_pods.go:61] "kube-proxy-r68j6" [343cddb9-92cd-4313-a597-cce17924b2d7] Running
	I0421 18:47:50.444559   14120 system_pods.go:61] "kube-scheduler-functional-808300" [7f2bc37e-7207-463e-9444-f360c05fdbbc] Running
	I0421 18:47:50.444559   14120 system_pods.go:61] "storage-provisioner" [24f4cf93-e486-46a0-89a5-a94fe4593b32] Running
	I0421 18:47:50.444559   14120 system_pods.go:74] duration metric: took 164.9783ms to wait for pod list to return data ...
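The system_pods.go check above lists everything in kube-system and requires the expected pods to be present and Running before moving on. A compact sketch of that list-and-report step, assuming an existing clientset:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countRunningSystemPods lists kube-system and reports how many pods are Running,
// printing one line per pod in the same spirit as the log above.
func countRunningSystemPods(ctx context.Context, client kubernetes.Interface) (int, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	running := 0
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	return running, nil // 7 of 7 Running in the run above
}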
	I0421 18:47:50.444559   14120 default_sa.go:34] waiting for default service account to be created ...
	I0421 18:47:50.623925   14120 request.go:629] Waited for 179.2192ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/namespaces/default/serviceaccounts
	I0421 18:47:50.624138   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/default/serviceaccounts
	I0421 18:47:50.624264   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:50.624264   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:50.624264   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:50.628607   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:50.628796   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:50.628796   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:50 GMT
	I0421 18:47:50.628796   14120 round_trippers.go:580]     Audit-Id: 578ed47f-43d0-49b3-8551-8aefed7d9159
	I0421 18:47:50.628796   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:50.628796   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:50.628796   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:50.628796   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:50.628796   14120 round_trippers.go:580]     Content-Length: 261
	I0421 18:47:50.628796   14120 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"590"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6ee71aa0-a0f0-4639-9c96-4c4dcbbd7ee1","resourceVersion":"313","creationTimestamp":"2024-04-21T18:45:02Z"}}]}
	I0421 18:47:50.629110   14120 default_sa.go:45] found service account: "default"
	I0421 18:47:50.629228   14120 default_sa.go:55] duration metric: took 184.5494ms for default service account to be created ...
	I0421 18:47:50.629228   14120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 18:47:50.812559   14120 request.go:629] Waited for 182.9248ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods
	I0421 18:47:50.812649   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/namespaces/kube-system/pods
	I0421 18:47:50.812772   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:50.812772   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:50.812772   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:50.818265   14120 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 18:47:50.818265   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:50.818265   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:50.818265   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:50.818265   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:50.818265   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:50 GMT
	I0421 18:47:50.819204   14120 round_trippers.go:580]     Audit-Id: faff24aa-c3b8-466e-bc06-0757351d3675
	I0421 18:47:50.819204   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:50.820344   14120 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"590"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-g2fk9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75","resourceVersion":"524","creationTimestamp":"2024-04-21T18:45:02Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"8ca42f95-77b3-4d85-90ae-7995d5546e14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T18:45:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ca42f95-77b3-4d85-90ae-7995d5546e14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50822 chars]
	I0421 18:47:50.824482   14120 system_pods.go:86] 7 kube-system pods found
	I0421 18:47:50.824549   14120 system_pods.go:89] "coredns-7db6d8ff4d-g2fk9" [a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75] Running
	I0421 18:47:50.824549   14120 system_pods.go:89] "etcd-functional-808300" [0426dc0d-4f18-437a-a64d-213be30ceae3] Running
	I0421 18:47:50.824549   14120 system_pods.go:89] "kube-apiserver-functional-808300" [6c8fa5ce-1fde-446e-a0c9-a204acb6dd7f] Running
	I0421 18:47:50.824680   14120 system_pods.go:89] "kube-controller-manager-functional-808300" [f66b9bfd-321d-458e-b897-b4d57a3a419e] Running
	I0421 18:47:50.824680   14120 system_pods.go:89] "kube-proxy-r68j6" [343cddb9-92cd-4313-a597-cce17924b2d7] Running
	I0421 18:47:50.824730   14120 system_pods.go:89] "kube-scheduler-functional-808300" [7f2bc37e-7207-463e-9444-f360c05fdbbc] Running
	I0421 18:47:50.824730   14120 system_pods.go:89] "storage-provisioner" [24f4cf93-e486-46a0-89a5-a94fe4593b32] Running
	I0421 18:47:50.824730   14120 system_pods.go:126] duration metric: took 195.5003ms to wait for k8s-apps to be running ...
	I0421 18:47:50.824862   14120 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 18:47:50.841294   14120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 18:47:50.870157   14120 system_svc.go:56] duration metric: took 45.3423ms WaitForService to wait for kubelet
	I0421 18:47:50.870157   14120 kubeadm.go:576] duration metric: took 2.8565726s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 18:47:50.870256   14120 node_conditions.go:102] verifying NodePressure condition ...
	I0421 18:47:51.017899   14120 request.go:629] Waited for 147.5699ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.199.19:8441/api/v1/nodes
	I0421 18:47:51.018202   14120 round_trippers.go:463] GET https://172.27.199.19:8441/api/v1/nodes
	I0421 18:47:51.018305   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:51.018344   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:51.018344   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:51.023830   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:51.023830   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:51.023912   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:51.023959   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:51 GMT
	I0421 18:47:51.023959   14120 round_trippers.go:580]     Audit-Id: 3355ea3e-41fb-4732-9ed3-017c778d9452
	I0421 18:47:51.024035   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:51.024092   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:51.024092   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:51.024484   14120 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"590"},"items":[{"metadata":{"name":"functional-808300","uid":"51c0d7cd-cf18-440c-bd0d-c43c0be5b815","resourceVersion":"507","creationTimestamp":"2024-04-21T18:44:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-808300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"functional-808300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T18_44_48_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0421 18:47:51.025465   14120 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 18:47:51.025465   14120 node_conditions.go:123] node cpu capacity is 2
	I0421 18:47:51.025465   14120 node_conditions.go:105] duration metric: took 155.2084ms to run NodePressure ...
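The NodePressure step reads its figures straight off Node.Status.Capacity, which is where the 17734596Ki ephemeral-storage and 2-CPU numbers above come from. A short sketch, assuming an existing clientset:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportNodeCapacity prints ephemeral-storage and CPU capacity for every node.
func reportNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral capacity %s, cpu capacity %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}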
	I0421 18:47:51.025465   14120 start.go:240] waiting for startup goroutines ...
	I0421 18:47:52.562795   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:47:52.563019   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:52.563202   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:47:52.582618   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:47:52.582618   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:52.583582   14120 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 18:47:52.583582   14120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 18:47:52.583700   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
	I0421 18:47:54.821974   14120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 18:47:54.821974   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:54.821974   14120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
	I0421 18:47:55.247463   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:47:55.247463   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:55.248025   14120 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0421 18:47:55.404634   14120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 18:47:56.380873   14120 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0421 18:47:56.380873   14120 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0421 18:47:56.380873   14120 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0421 18:47:56.380873   14120 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0421 18:47:56.380873   14120 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0421 18:47:56.380873   14120 command_runner.go:130] > pod/storage-provisioner configured
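The storage-provisioner addon is installed by copying the manifest into the guest and running the bundled kubectl over SSH; because this cluster was restarted, every object comes back "unchanged" or "configured". The invocation the SSH runner executes, reproduced as an os/exec sketch (it would have to run inside the guest, not on the Windows host):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exactly the command logged by ssh_runner.go above.
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
	fmt.Print(string(out)) // "serviceaccount/storage-provisioner unchanged", etc.
	if err != nil {
		panic(err)
	}
}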
	I0421 18:47:57.469277   14120 main.go:141] libmachine: [stdout =====>] : 172.27.199.19
	
	I0421 18:47:57.469876   14120 main.go:141] libmachine: [stderr =====>] : 
	I0421 18:47:57.470632   14120 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
	I0421 18:47:57.606432   14120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 18:47:57.801274   14120 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
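After storageclass.yaml is applied, the GET and PUT exchange that follows re-asserts that the "standard" class carries the default-class annotation. A hedged client-go sketch of that read-modify-write, assuming an existing clientset:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureDefaultStorageClass fetches the "standard" StorageClass, sets the
// is-default-class annotation to "true", and writes it back, mirroring the
// GET + PUT pair in the log below.
func ensureDefaultStorageClass(ctx context.Context, client kubernetes.Interface) error {
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}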
	I0421 18:47:57.801274   14120 round_trippers.go:463] GET https://172.27.199.19:8441/apis/storage.k8s.io/v1/storageclasses
	I0421 18:47:57.801274   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:57.801274   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:57.801274   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:57.805885   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:57.806754   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:57.806754   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:57.806754   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:57.806754   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:57.806754   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:57.806754   14120 round_trippers.go:580]     Content-Length: 1273
	I0421 18:47:57.806754   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:57 GMT
	I0421 18:47:57.806754   14120 round_trippers.go:580]     Audit-Id: 6b4b14ee-4704-4dab-b751-9ff8076b3b9e
	I0421 18:47:57.806850   14120 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"597"},"items":[{"metadata":{"name":"standard","uid":"3157c7c6-5e69-4bf9-846e-9f49f1b458b7","resourceVersion":"398","creationTimestamp":"2024-04-21T18:45:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-21T18:45:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0421 18:47:57.807450   14120 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3157c7c6-5e69-4bf9-846e-9f49f1b458b7","resourceVersion":"398","creationTimestamp":"2024-04-21T18:45:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-21T18:45:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0421 18:47:57.807450   14120 round_trippers.go:463] PUT https://172.27.199.19:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0421 18:47:57.807450   14120 round_trippers.go:469] Request Headers:
	I0421 18:47:57.807450   14120 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 18:47:57.807450   14120 round_trippers.go:473]     Accept: application/json, */*
	I0421 18:47:57.807450   14120 round_trippers.go:473]     Content-Type: application/json
	I0421 18:47:57.812041   14120 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 18:47:57.812041   14120 round_trippers.go:577] Response Headers:
	I0421 18:47:57.812041   14120 round_trippers.go:580]     Content-Length: 1220
	I0421 18:47:57.812041   14120 round_trippers.go:580]     Date: Sun, 21 Apr 2024 18:47:57 GMT
	I0421 18:47:57.812041   14120 round_trippers.go:580]     Audit-Id: 45379820-5655-48cc-a150-e83ddf68d048
	I0421 18:47:57.812041   14120 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 18:47:57.812041   14120 round_trippers.go:580]     Content-Type: application/json
	I0421 18:47:57.812041   14120 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 60829007-7ddc-4f08-9238-1aa9ee8c0e65
	I0421 18:47:57.812041   14120 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6b7e8184-7635-479a-b843-90faafcb8098
	I0421 18:47:57.812356   14120 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3157c7c6-5e69-4bf9-846e-9f49f1b458b7","resourceVersion":"398","creationTimestamp":"2024-04-21T18:45:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-21T18:45:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0421 18:47:57.815152   14120 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0421 18:47:57.819174   14120 addons.go:505] duration metric: took 9.8055408s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0421 18:47:57.819174   14120 start.go:245] waiting for cluster config update ...
	I0421 18:47:57.819174   14120 start.go:254] writing updated cluster config ...
	I0421 18:47:57.832024   14120 ssh_runner.go:195] Run: rm -f paused
	I0421 18:47:57.990221   14120 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 18:47:57.994749   14120 out.go:177] * Done! kubectl is now configured to use "functional-808300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.829989207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.830086706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.920288500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.920454099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.920476299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.920902195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.928075839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.933538396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.933560196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:34 functional-808300 dockerd[4284]: time="2024-04-21T18:47:34.933902194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:35 functional-808300 cri-dockerd[4525]: time="2024-04-21T18:47:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/034b89579ac5cd83def0b78f8a635b982a3c121177a090a71c8561701e539743/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 18:47:35 functional-808300 cri-dockerd[4525]: time="2024-04-21T18:47:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ae6762292fa4cd7891b8c895a0dc40094de367e76204d7b5bee8ae13afaeafa/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.374058901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.374384699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.374523998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.374997994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:35 functional-808300 cri-dockerd[4525]: time="2024-04-21T18:47:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c2387e68a10fcb3adf9abaff446520cc014fa01fe5bf56d82a88fe6a8cb4c31b/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.679712581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.679929879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.680038879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.680198177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.900889065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.901097063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.901147663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 18:47:35 functional-808300 dockerd[4284]: time="2024-04-21T18:47:35.901353261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	483ca4739e846       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   c2387e68a10fc       coredns-7db6d8ff4d-g2fk9
	8da4fc37600d9       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   7ae6762292fa4       storage-provisioner
	cbb246c31d083       a0bf559e280cf       2 minutes ago       Running             kube-proxy                2                   034b89579ac5c       kube-proxy-r68j6
	76aa32367c26c       259c8277fcbbc       2 minutes ago       Running             kube-scheduler            2                   b641ba53e9aa6       kube-scheduler-functional-808300
	08b7c3eabdf5c       c42f13656d0b2       2 minutes ago       Running             kube-apiserver            2                   a2c5364f1d74c       kube-apiserver-functional-808300
	aa1e626de1fd5       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   eccf72bd4ec05       etcd-functional-808300
	f1bfedf1d7fb1       c7aad43836fa5       2 minutes ago       Running             kube-controller-manager   2                   c5924da49e624       kube-controller-manager-functional-808300
	47f8ae1f57fd4       259c8277fcbbc       2 minutes ago       Created             kube-scheduler            1                   ac9ce1b9a1c61       kube-scheduler-functional-808300
	d697d660a8f7e       c42f13656d0b2       2 minutes ago       Created             kube-apiserver            1                   12ddfb2f2a47b       kube-apiserver-functional-808300
	f3cd92f6f6fa4       c7aad43836fa5       2 minutes ago       Created             kube-controller-manager   1                   e3c37cf695830       kube-controller-manager-functional-808300
	36dee7208f354       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   4e5b0f820bc71       etcd-functional-808300
	fd569f7642c5c       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       1                   42f418f31377c       storage-provisioner
	64339f31aff42       a0bf559e280cf       2 minutes ago       Exited              kube-proxy                1                   e5ce3ab7d2b36       kube-proxy-r68j6
	c5c3082312672       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   4843b87bfab9c       coredns-7db6d8ff4d-g2fk9
	
	
	==> coredns [483ca4739e84] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = de98fc7f457fe7491daa3d9d76be7f71a9abd25d984af2a62bb46996cb08a67c43ff1cd584a40d0f2ba65c174ae12856de0f6ecf594962c6086de5d30a624a4a
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33548 - 46043 "HINFO IN 3168473865419006316.1749186328485280056. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033640427s
	
	
	==> coredns [c5c308231267] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1179654439]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:45:05.002) (total time: 30001ms):
	Trace[1179654439]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:45:35.003)
	Trace[1179654439]: [30.00132606s] [30.00132606s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[761200082]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:45:05.002) (total time: 30002ms):
	Trace[761200082]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:45:35.003)
	Trace[761200082]: [30.002147147s] [30.002147147s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[726794201]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (21-Apr-2024 18:45:05.003) (total time: 30002ms):
	Trace[726794201]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:45:35.004)
	Trace[726794201]: [30.002070822s] [30.002070822s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = de98fc7f457fe7491daa3d9d76be7f71a9abd25d984af2a62bb46996cb08a67c43ff1cd584a40d0f2ba65c174ae12856de0f6ecf594962c6086de5d30a624a4a
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51574 - 8329 "HINFO IN 8128627570823120335.5912662709733510774. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025736983s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-808300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-808300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=functional-808300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T18_44_48_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 18:44:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-808300
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 18:49:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 18:49:36 +0000   Sun, 21 Apr 2024 18:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 18:49:36 +0000   Sun, 21 Apr 2024 18:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 18:49:36 +0000   Sun, 21 Apr 2024 18:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 18:49:36 +0000   Sun, 21 Apr 2024 18:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.199.19
	  Hostname:    functional-808300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 9019ca42df2a480ba92917c57c25cda0
	  System UUID:                985ac13e-edfa-df4c-b91a-6d33adf11e07
	  Boot ID:                    75f720f8-edf3-4b81-adb3-9353fd3a3bc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g2fk9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m44s
	  kube-system                 etcd-functional-808300                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m58s
	  kube-system                 kube-apiserver-functional-808300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-functional-808300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-r68j6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-functional-808300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m58s                  kubelet          Node functional-808300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s                  kubelet          Node functional-808300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s                  kubelet          Node functional-808300 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m58s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m55s                  kubelet          Node functional-808300 status is now: NodeReady
	  Normal  RegisteredNode           4m45s                  node-controller  Node functional-808300 event: Registered Node functional-808300 in Controller
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s (x8 over 2m18s)  kubelet          Node functional-808300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s (x8 over 2m18s)  kubelet          Node functional-808300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s (x7 over 2m18s)  kubelet          Node functional-808300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                     node-controller  Node functional-808300 event: Registered Node functional-808300 in Controller
	
	
	==> dmesg <==
	[  +0.747493] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +8.027952] systemd-fstab-generator[1739]: Ignoring "noauto" option for root device
	[  +0.118393] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.046069] systemd-fstab-generator[2143]: Ignoring "noauto" option for root device
	[  +0.152799] kauditd_printk_skb: 62 callbacks suppressed
	[Apr21 18:45] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +0.236946] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.232279] kauditd_printk_skb: 88 callbacks suppressed
	[ +31.985068] kauditd_printk_skb: 10 callbacks suppressed
	[Apr21 18:47] systemd-fstab-generator[3796]: Ignoring "noauto" option for root device
	[  +0.752173] systemd-fstab-generator[3832]: Ignoring "noauto" option for root device
	[  +0.320576] systemd-fstab-generator[3844]: Ignoring "noauto" option for root device
	[  +0.344568] systemd-fstab-generator[3858]: Ignoring "noauto" option for root device
	[  +5.379779] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.164622] systemd-fstab-generator[4473]: Ignoring "noauto" option for root device
	[  +0.229206] systemd-fstab-generator[4485]: Ignoring "noauto" option for root device
	[  +0.231806] systemd-fstab-generator[4497]: Ignoring "noauto" option for root device
	[  +0.369824] systemd-fstab-generator[4512]: Ignoring "noauto" option for root device
	[  +0.983520] systemd-fstab-generator[4668]: Ignoring "noauto" option for root device
	[  +0.144489] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.675483] systemd-fstab-generator[5736]: Ignoring "noauto" option for root device
	[  +0.165427] kauditd_printk_skb: 82 callbacks suppressed
	[  +7.055232] kauditd_printk_skb: 42 callbacks suppressed
	[ +11.757219] kauditd_printk_skb: 29 callbacks suppressed
	[  +1.450317] systemd-fstab-generator[6603]: Ignoring "noauto" option for root device
	
	
	==> etcd [36dee7208f35] <==
	{"level":"info","ts":"2024-04-21T18:47:24.480844Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"24.271267ms"}
	{"level":"info","ts":"2024-04-21T18:47:24.512572Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-21T18:47:24.529373Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"fa6970edd43cc29f","local-member-id":"799b6492948521c4","commit-index":543}
	{"level":"info","ts":"2024-04-21T18:47:24.52946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-21T18:47:24.529494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 became follower at term 2"}
	{"level":"info","ts":"2024-04-21T18:47:24.529512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 799b6492948521c4 [peers: [], term: 2, commit: 543, applied: 0, lastindex: 543, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-21T18:47:24.537545Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-21T18:47:24.561162Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":505}
	{"level":"info","ts":"2024-04-21T18:47:24.569284Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-21T18:47:24.577163Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"799b6492948521c4","timeout":"7s"}
	{"level":"info","ts":"2024-04-21T18:47:24.577468Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"799b6492948521c4"}
	{"level":"info","ts":"2024-04-21T18:47:24.5775Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"799b6492948521c4","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-21T18:47:24.577912Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-21T18:47:24.578048Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T18:47:24.578077Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T18:47:24.578088Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T18:47:24.578316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 switched to configuration voters=(8762708080699187652)"}
	{"level":"info","ts":"2024-04-21T18:47:24.578371Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa6970edd43cc29f","local-member-id":"799b6492948521c4","added-peer-id":"799b6492948521c4","added-peer-peer-urls":["https://172.27.199.19:2380"]}
	{"level":"info","ts":"2024-04-21T18:47:24.578468Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa6970edd43cc29f","local-member-id":"799b6492948521c4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T18:47:24.578496Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T18:47:24.598183Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-21T18:47:24.598559Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"799b6492948521c4","initial-advertise-peer-urls":["https://172.27.199.19:2380"],"listen-peer-urls":["https://172.27.199.19:2380"],"advertise-client-urls":["https://172.27.199.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.199.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-21T18:47:24.598594Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T18:47:24.598957Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.199.19:2380"}
	{"level":"info","ts":"2024-04-21T18:47:24.598991Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.199.19:2380"}
	
	
	==> etcd [aa1e626de1fd] <==
	{"level":"info","ts":"2024-04-21T18:47:29.466488Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T18:47:29.466771Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-21T18:47:29.472575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 switched to configuration voters=(8762708080699187652)"}
	{"level":"info","ts":"2024-04-21T18:47:29.472687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa6970edd43cc29f","local-member-id":"799b6492948521c4","added-peer-id":"799b6492948521c4","added-peer-peer-urls":["https://172.27.199.19:2380"]}
	{"level":"info","ts":"2024-04-21T18:47:29.473026Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa6970edd43cc29f","local-member-id":"799b6492948521c4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T18:47:29.473063Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T18:47:29.50663Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-21T18:47:29.512969Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.199.19:2380"}
	{"level":"info","ts":"2024-04-21T18:47:29.519803Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.199.19:2380"}
	{"level":"info","ts":"2024-04-21T18:47:29.515372Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"799b6492948521c4","initial-advertise-peer-urls":["https://172.27.199.19:2380"],"listen-peer-urls":["https://172.27.199.19:2380"],"advertise-client-urls":["https://172.27.199.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.199.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-21T18:47:29.515406Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T18:47:30.462846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-21T18:47:30.463014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-21T18:47:30.463133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 received MsgPreVoteResp from 799b6492948521c4 at term 2"}
	{"level":"info","ts":"2024-04-21T18:47:30.4632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 became candidate at term 3"}
	{"level":"info","ts":"2024-04-21T18:47:30.463234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 received MsgVoteResp from 799b6492948521c4 at term 3"}
	{"level":"info","ts":"2024-04-21T18:47:30.463472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"799b6492948521c4 became leader at term 3"}
	{"level":"info","ts":"2024-04-21T18:47:30.463841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 799b6492948521c4 elected leader 799b6492948521c4 at term 3"}
	{"level":"info","ts":"2024-04-21T18:47:30.477171Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T18:47:30.487202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T18:47:30.477134Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"799b6492948521c4","local-member-attributes":"{Name:functional-808300 ClientURLs:[https://172.27.199.19:2379]}","request-path":"/0/members/799b6492948521c4/attributes","cluster-id":"fa6970edd43cc29f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T18:47:30.491622Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T18:47:30.492585Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T18:47:30.505489Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T18:47:30.52318Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.199.19:2379"}
	
	
	==> kernel <==
	 18:49:46 up 7 min,  0 users,  load average: 0.64, 0.97, 0.50
	Linux functional-808300 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [08b7c3eabdf5] <==
	I0421 18:47:33.811142       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0421 18:47:33.812700       1 shared_informer.go:320] Caches are synced for configmaps
	I0421 18:47:33.813055       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0421 18:47:33.813253       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0421 18:47:33.813638       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0421 18:47:33.813865       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 18:47:33.823963       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 18:47:33.825021       1 policy_source.go:224] refreshing policies
	I0421 18:47:33.825387       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 18:47:33.826018       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0421 18:47:33.827383       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0421 18:47:33.828062       1 aggregator.go:165] initial CRD sync complete...
	I0421 18:47:33.828279       1 autoregister_controller.go:141] Starting autoregister controller
	I0421 18:47:33.828546       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0421 18:47:33.828867       1 cache.go:39] Caches are synced for autoregister controller
	I0421 18:47:33.831241       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 18:47:34.663703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0421 18:47:35.309334       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.199.19]
	I0421 18:47:35.314115       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 18:47:35.336069       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0421 18:47:36.008202       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0421 18:47:36.097472       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0421 18:47:36.233404       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0421 18:47:36.309881       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0421 18:47:36.330174       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [d697d660a8f7] <==
	
	
	==> kube-controller-manager [f1bfedf1d7fb] <==
	I0421 18:47:46.624324       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0421 18:47:46.623997       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0421 18:47:46.630026       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0421 18:47:46.631257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.157301ms"
	I0421 18:47:46.632001       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0421 18:47:46.637088       1 shared_informer.go:320] Caches are synced for service account
	I0421 18:47:46.640506       1 shared_informer.go:320] Caches are synced for taint
	I0421 18:47:46.641045       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0421 18:47:46.641498       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0421 18:47:46.641986       1 shared_informer.go:320] Caches are synced for daemon sets
	I0421 18:47:46.645932       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-808300"
	I0421 18:47:46.646027       1 shared_informer.go:320] Caches are synced for PV protection
	I0421 18:47:46.646194       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0421 18:47:46.646327       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0421 18:47:46.653205       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0421 18:47:46.655303       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0421 18:47:46.685378       1 shared_informer.go:320] Caches are synced for stateful set
	I0421 18:47:46.703501       1 shared_informer.go:320] Caches are synced for attach detach
	I0421 18:47:46.704717       1 shared_informer.go:320] Caches are synced for disruption
	I0421 18:47:46.780943       1 shared_informer.go:320] Caches are synced for persistent volume
	I0421 18:47:46.804858       1 shared_informer.go:320] Caches are synced for resource quota
	I0421 18:47:46.835524       1 shared_informer.go:320] Caches are synced for resource quota
	I0421 18:47:47.276832       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 18:47:47.317961       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 18:47:47.318097       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [f3cd92f6f6fa] <==
	
	
	==> kube-proxy [64339f31aff4] <==
	I0421 18:47:23.929378       1 server_linux.go:69] "Using iptables proxy"
	E0421 18:47:23.935973       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-808300\": dial tcp 172.27.199.19:8441: connect: connection refused"
	
	
	==> kube-proxy [cbb246c31d08] <==
	I0421 18:47:35.800653       1 server_linux.go:69] "Using iptables proxy"
	I0421 18:47:35.855926       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.199.19"]
	I0421 18:47:36.037092       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 18:47:36.037431       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 18:47:36.038035       1 server_linux.go:165] "Using iptables Proxier"
	I0421 18:47:36.049899       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 18:47:36.051104       1 server.go:872] "Version info" version="v1.30.0"
	I0421 18:47:36.051672       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:47:36.054696       1 config.go:192] "Starting service config controller"
	I0421 18:47:36.056892       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 18:47:36.057701       1 config.go:101] "Starting endpoint slice config controller"
	I0421 18:47:36.058093       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 18:47:36.059559       1 config.go:319] "Starting node config controller"
	I0421 18:47:36.060415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 18:47:36.158856       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 18:47:36.159146       1 shared_informer.go:320] Caches are synced for service config
	I0421 18:47:36.161683       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [47f8ae1f57fd] <==
	
	
	==> kube-scheduler [76aa32367c26] <==
	I0421 18:47:32.342551       1 serving.go:380] Generated self-signed cert in-memory
	W0421 18:47:33.748964       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0421 18:47:33.749452       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 18:47:33.749656       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0421 18:47:33.749814       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0421 18:47:33.818635       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0421 18:47:33.818677       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 18:47:33.829544       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0421 18:47:33.829587       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0421 18:47:33.832630       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0421 18:47:33.832808       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0421 18:47:33.930179       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 18:47:33 functional-808300 kubelet[5742]: I0421 18:47:33.861568    5742 kubelet_node_status.go:112] "Node was previously registered" node="functional-808300"
	Apr 21 18:47:33 functional-808300 kubelet[5742]: I0421 18:47:33.862280    5742 kubelet_node_status.go:76] "Successfully registered node" node="functional-808300"
	Apr 21 18:47:33 functional-808300 kubelet[5742]: I0421 18:47:33.864684    5742 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 21 18:47:33 functional-808300 kubelet[5742]: I0421 18:47:33.866171    5742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.077367    5742 apiserver.go:52] "Watching apiserver"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.081491    5742 topology_manager.go:215] "Topology Admit Handler" podUID="343cddb9-92cd-4313-a597-cce17924b2d7" podNamespace="kube-system" podName="kube-proxy-r68j6"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.081670    5742 topology_manager.go:215] "Topology Admit Handler" podUID="a90cc7f1-69e1-4f34-9bb1-1c8d97dffa75" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g2fk9"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.081792    5742 topology_manager.go:215] "Topology Admit Handler" podUID="24f4cf93-e486-46a0-89a5-a94fe4593b32" podNamespace="kube-system" podName="storage-provisioner"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.092956    5742 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.125088    5742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/343cddb9-92cd-4313-a597-cce17924b2d7-lib-modules\") pod \"kube-proxy-r68j6\" (UID: \"343cddb9-92cd-4313-a597-cce17924b2d7\") " pod="kube-system/kube-proxy-r68j6"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.125324    5742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/24f4cf93-e486-46a0-89a5-a94fe4593b32-tmp\") pod \"storage-provisioner\" (UID: \"24f4cf93-e486-46a0-89a5-a94fe4593b32\") " pod="kube-system/storage-provisioner"
	Apr 21 18:47:34 functional-808300 kubelet[5742]: I0421 18:47:34.125402    5742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/343cddb9-92cd-4313-a597-cce17924b2d7-xtables-lock\") pod \"kube-proxy-r68j6\" (UID: \"343cddb9-92cd-4313-a597-cce17924b2d7\") " pod="kube-system/kube-proxy-r68j6"
	Apr 21 18:47:35 functional-808300 kubelet[5742]: I0421 18:47:35.044987    5742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="034b89579ac5cd83def0b78f8a635b982a3c121177a090a71c8561701e539743"
	Apr 21 18:47:35 functional-808300 kubelet[5742]: I0421 18:47:35.293524    5742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae6762292fa4cd7891b8c895a0dc40094de367e76204d7b5bee8ae13afaeafa"
	Apr 21 18:47:35 functional-808300 kubelet[5742]: I0421 18:47:35.524666    5742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2387e68a10fcb3adf9abaff446520cc014fa01fe5bf56d82a88fe6a8cb4c31b"
	Apr 21 18:48:28 functional-808300 kubelet[5742]: E0421 18:48:28.273236    5742 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:48:28 functional-808300 kubelet[5742]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:48:28 functional-808300 kubelet[5742]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:48:28 functional-808300 kubelet[5742]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:48:28 functional-808300 kubelet[5742]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 18:49:28 functional-808300 kubelet[5742]: E0421 18:49:28.267701    5742 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 18:49:28 functional-808300 kubelet[5742]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 18:49:28 functional-808300 kubelet[5742]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 18:49:28 functional-808300 kubelet[5742]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 18:49:28 functional-808300 kubelet[5742]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [8da4fc37600d] <==
	I0421 18:47:35.911104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0421 18:47:35.955675       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0421 18:47:35.956298       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0421 18:47:53.419221       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0421 18:47:53.420551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-808300_6a42be83-0ba0-4391-892b-073722d49263!
	I0421 18:47:53.419817       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89406d66-a640-4d0a-b43a-b70ece6d1dd7", APIVersion:"v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-808300_6a42be83-0ba0-4391-892b-073722d49263 became leader
	I0421 18:47:53.520883       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-808300_6a42be83-0ba0-4391-892b-073722d49263!
	
	
	==> storage-provisioner [fd569f7642c5] <==
	I0421 18:47:23.816943       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0421 18:47:23.841053       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:49:38.635174    5536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-808300 -n functional-808300: (12.359784s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-808300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (35.08s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config unset cpus" to be -""- but got *"W0421 18:52:54.346735    8036 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 config get cpus: exit status 14 (270.926ms)

                                                
                                                
** stderr ** 
	W0421 18:52:54.674634   10108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0421 18:52:54.674634   10108 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0421 18:52:54.958672   12812 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config get cpus" to be -""- but got *"W0421 18:52:55.252366    4992 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config unset cpus" to be -""- but got *"W0421 18:52:55.532263    6680 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 config get cpus: exit status 14 (273.3561ms)

                                                
                                                
** stderr ** 
	W0421 18:52:55.811897    9192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-808300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0421 18:52:55.811897    9192 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.75s)
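Note on the failure above: every `minikube config` invocation in this run prepends the klog warning "Unable to resolve the current Docker CLI context \"default\"" to stderr, and the assertions at functional_test.go:1206 compare that stderr against the expected message, so the extra warning line fails each comparison. A minimal illustrative Go sketch of the effect (this is not the actual functional_test.go helper; the regexp and filtering below are assumptions for illustration only):

	// Illustrative only: shows why an exact stderr comparison breaks when a
	// klog-style warning line is prepended to the command's real output.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// klogWarning matches klog-formatted warning lines such as
	// "W0421 18:52:54.674634   10108 main.go:291] ...".
	var klogWarning = regexp.MustCompile(`^W\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ .*\]`)

	// stripWarnings drops klog warning lines so only the command's own output remains.
	func stripWarnings(stderr string) string {
		var kept []string
		for _, line := range strings.Split(stderr, "\n") {
			if klogWarning.MatchString(strings.TrimSpace(line)) {
				continue
			}
			kept = append(kept, line)
		}
		return strings.TrimSpace(strings.Join(kept, "\n"))
	}

	func main() {
		// Abbreviated from the run above.
		got := "W0421 18:52:54.674634   10108 main.go:291] Unable to resolve the current Docker CLI context \"default\": ...\nError: specified key could not be found in config"
		want := "Error: specified key could not be found in config"

		fmt.Println("exact match:", got == want)                    // false: the warning line breaks it
		fmt.Println("after stripping:", stripWarnings(got) == want) // true
	}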

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service --namespace=default --https --url hello-node: exit status 1 (15.0533956s)

                                                
                                                
** stderr ** 
	W0421 18:53:39.928229    7572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-808300 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.05s)
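Note: this and the two ServiceCmd subtests that follow share the same signature: `minikube service ... --url` exits with status 1 after roughly 15 seconds, emits only the Docker CLI context warning on stderr, and therefore returns no URL. A minimal sketch of the invocation pattern being exercised (binary path, profile, and flags are copied from the log above; this is not the test's own code):

	// Sketch: run the service command, capture output, and treat a non-zero
	// exit as "failed to get service url", mirroring the failure above.
	package main

	import (
		"bytes"
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
			"-p", "functional-808300", "service", "--namespace=default", "--https", "--url", "hello-node")

		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr

		if err := cmd.Run(); err != nil {
			// The failing runs above land here after ~15s with an empty stdout.
			fmt.Printf("failed to get service url: %v\nstderr: %s\n", err, stderr.String())
			return
		}
		fmt.Println("service url:", stdout.String())
	}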

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url --format={{.IP}}: exit status 1 (15.0197507s)

                                                
                                                
** stderr ** 
	W0421 18:53:55.010044    9328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)
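Note: the check behind `"" is not a valid IP` (functional_test.go:1544) is that the value printed by --format={{.IP}} must parse as an IP address; since the command returned nothing, the empty string is rejected. A tiny illustrative sketch of that kind of validation (not the test's own helper; the sample address is a placeholder):

	// Sketch: validate that a printed value parses as an IP address.
	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func isValidIP(s string) bool {
		return net.ParseIP(strings.TrimSpace(s)) != nil
	}

	func main() {
		fmt.Println(isValidIP("10.0.0.1")) // true (placeholder address)
		fmt.Println(isValidIP(""))         // false: the failing case in the log above
	}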

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url: exit status 1 (15.0284644s)

                                                
                                                
** stderr ** 
	W0421 18:54:10.054052   12968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-808300 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)
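Note: functional_test.go:1569 expects the returned endpoint to carry an "http" scheme; with no URL returned, the parsed scheme is empty, hence `expected scheme to be -"http"- got scheme: *""*`. An illustrative sketch of such a scheme check (the sample URL is a placeholder, not taken from this run):

	// Sketch: parse an endpoint string and inspect its scheme.
	package main

	import (
		"fmt"
		"net/url"
	)

	func schemeOf(raw string) string {
		u, err := url.Parse(raw)
		if err != nil {
			return ""
		}
		return u.Scheme
	}

	func main() {
		fmt.Printf("%q\n", schemeOf("http://192.168.0.10:30080")) // "http" (placeholder URL)
		fmt.Printf("%q\n", schemeOf(""))                          // "": the failing case, since no URL was returned
	}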

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (70.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- sh -c "ping -c 1 172.27.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- sh -c "ping -c 1 172.27.192.1": exit status 1 (10.5650018s)

                                                
                                                
-- stdout --
	PING 172.27.192.1 (172.27.192.1): 56 data bytes
	
	--- 172.27.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 19:13:59.591169    2304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.27.192.1) from pod (busybox-fc5497c4f-cmvt9): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-nttt5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-nttt5 -- sh -c "ping -c 1 172.27.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-nttt5 -- sh -c "ping -c 1 172.27.192.1": exit status 1 (10.5612375s)

                                                
                                                
-- stdout --
	PING 172.27.192.1 (172.27.192.1): 56 data bytes
	
	--- 172.27.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 19:14:10.729355    8560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.27.192.1) from pod (busybox-fc5497c4f-nttt5): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-pnbbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-pnbbn -- sh -c "ping -c 1 172.27.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-pnbbn -- sh -c "ping -c 1 172.27.192.1": exit status 1 (10.5700183s)

                                                
                                                
-- stdout --
	PING 172.27.192.1 (172.27.192.1): 56 data bytes
	
	--- 172.27.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 19:14:21.853992    4176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.27.192.1) from pod (busybox-fc5497c4f-pnbbn): exit status 1
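Note: all three busybox pods report 100% packet loss pinging the host at 172.27.192.1, which ha_test.go:219 treats as a failure. A minimal sketch (not ha_test.go itself) of the check being made: exec a single ping from a pod via the minikube kubectl wrapper and treat a non-zero exit as unreachable. Binary path, profile, pod name, and IP are copied from the log above.

	// Sketch: run the same kubectl exec ping as the failing test and check its exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		hostIP := "172.27.192.1"
		cmd := exec.Command("out/minikube-windows-amd64.exe", "kubectl", "-p", "ha-736000", "--",
			"exec", "busybox-fc5497c4f-cmvt9", "--", "sh", "-c", "ping -c 1 "+hostIP)

		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// The runs above land here: "1 packets transmitted, 0 packets received, 100% packet loss".
			fmt.Printf("failed to ping host (%s): %v\n", hostIP, err)
			return
		}
		fmt.Printf("host %s is reachable from the pod\n", hostIP)
	}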
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-736000 -n ha-736000: (12.730635s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 logs -n 25: (9.2160475s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-808300                    | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:56 UTC | 21 Apr 24 18:56 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-808300 image build -t     | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:56 UTC | 21 Apr 24 18:56 UTC |
	|         | localhost/my-image:functional-808300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-808300 image ls           | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:56 UTC | 21 Apr 24 18:56 UTC |
	| delete  | -p functional-808300                 | functional-808300 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:00 UTC | 21 Apr 24 19:01 UTC |
	| start   | -p ha-736000 --wait=true             | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:01 UTC | 21 Apr 24 19:13 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- apply -f             | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- rollout status       | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- get pods -o          | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- get pods -o          | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-cmvt9 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-nttt5 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-pnbbn --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-cmvt9 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-nttt5 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-pnbbn --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-cmvt9 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-nttt5 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-pnbbn -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- get pods -o          | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC | 21 Apr 24 19:13 UTC |
	|         | busybox-fc5497c4f-cmvt9              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:13 UTC |                     |
	|         | busybox-fc5497c4f-cmvt9 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:14 UTC | 21 Apr 24 19:14 UTC |
	|         | busybox-fc5497c4f-nttt5              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:14 UTC |                     |
	|         | busybox-fc5497c4f-nttt5 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:14 UTC | 21 Apr 24 19:14 UTC |
	|         | busybox-fc5497c4f-pnbbn              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-736000 -- exec                 | ha-736000         | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:14 UTC |                     |
	|         | busybox-fc5497c4f-pnbbn -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.192.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:01:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:01:30.769155    5552 out.go:291] Setting OutFile to fd 720 ...
	I0421 19:01:30.769155    5552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:01:30.769155    5552 out.go:304] Setting ErrFile to fd 716...
	I0421 19:01:30.769155    5552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:01:30.796479    5552 out.go:298] Setting JSON to false
	I0421 19:01:30.799790    5552 start.go:129] hostinfo: {"hostname":"minikube6","uptime":11965,"bootTime":1713714124,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 19:01:30.800827    5552 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 19:01:30.808149    5552 out.go:177] * [ha-736000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 19:01:30.814674    5552 notify.go:220] Checking for updates...
	I0421 19:01:30.817436    5552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:01:30.819945    5552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:01:30.822588    5552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 19:01:30.825285    5552 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:01:30.828109    5552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:01:30.831698    5552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:01:36.351157    5552 out.go:177] * Using the hyperv driver based on user configuration
	I0421 19:01:36.355841    5552 start.go:297] selected driver: hyperv
	I0421 19:01:36.355841    5552 start.go:901] validating driver "hyperv" against <nil>
	I0421 19:01:36.355841    5552 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:01:36.419031    5552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 19:01:36.420517    5552 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:01:36.420604    5552 cni.go:84] Creating CNI manager for ""
	I0421 19:01:36.420703    5552 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0421 19:01:36.420703    5552 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0421 19:01:36.420910    5552 start.go:340] cluster config:
	{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:01:36.421221    5552 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:01:36.427492    5552 out.go:177] * Starting "ha-736000" primary control-plane node in "ha-736000" cluster
	I0421 19:01:36.430007    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:01:36.430007    5552 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 19:01:36.430007    5552 cache.go:56] Caching tarball of preloaded images
	I0421 19:01:36.430620    5552 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 19:01:36.431224    5552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 19:01:36.431752    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:01:36.432130    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json: {Name:mkc8725b604d2f8b010420e709bf1023daa6f0a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:01:36.433503    5552 start.go:360] acquireMachinesLock for ha-736000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:01:36.433560    5552 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-736000"
	I0421 19:01:36.433560    5552 start.go:93] Provisioning new machine with config: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:01:36.433560    5552 start.go:125] createHost starting for "" (driver="hyperv")
	I0421 19:01:36.436889    5552 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:01:36.436889    5552 start.go:159] libmachine.API.Create for "ha-736000" (driver="hyperv")
	I0421 19:01:36.436889    5552 client.go:168] LocalClient.Create starting
	I0421 19:01:36.437874    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 19:01:36.438457    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:01:36.438500    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:01:36.438593    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 19:01:36.438593    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:01:36.438593    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:01:36.438593    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 19:01:38.648114    5552 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 19:01:38.648114    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:38.648114    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 19:01:40.461059    5552 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 19:01:40.461059    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:40.461767    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:01:41.979624    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:01:41.980211    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:41.980407    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:01:45.668116    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:01:45.668116    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:45.672440    5552 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:01:46.241084    5552 main.go:141] libmachine: Creating SSH key...
	I0421 19:01:46.440119    5552 main.go:141] libmachine: Creating VM...
	I0421 19:01:46.440119    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:01:49.369367    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:01:49.370213    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:49.370213    5552 main.go:141] libmachine: Using switch "Default Switch"
	I0421 19:01:49.370213    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:01:51.197008    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:01:51.197217    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:51.197217    5552 main.go:141] libmachine: Creating VHD
	I0421 19:01:51.197398    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 19:01:54.903849    5552 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 359490DA-85DD-4A6F-B5CD-00C97E3B216B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 19:01:54.903849    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:54.903849    5552 main.go:141] libmachine: Writing magic tar header
	I0421 19:01:54.904187    5552 main.go:141] libmachine: Writing SSH key tar header
	I0421 19:01:54.916926    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 19:01:58.145501    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:01:58.145501    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:58.145501    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\disk.vhd' -SizeBytes 20000MB
	I0421 19:02:00.767785    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:00.767785    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:00.768637    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-736000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 19:02:05.143847    5552 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-736000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 19:02:05.143847    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:05.143847    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-736000 -DynamicMemoryEnabled $false
	I0421 19:02:07.444321    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:07.444321    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:07.444442    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-736000 -Count 2
	I0421 19:02:09.662853    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:09.662853    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:09.663131    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-736000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\boot2docker.iso'
	I0421 19:02:12.273575    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:12.273575    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:12.274348    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-736000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\disk.vhd'
	I0421 19:02:15.014777    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:15.015773    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:15.015773    5552 main.go:141] libmachine: Starting VM...
	I0421 19:02:15.015858    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-736000
	I0421 19:02:18.149400    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:18.149400    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:18.149400    5552 main.go:141] libmachine: Waiting for host to start...
	I0421 19:02:18.150286    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:20.424590    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:20.424590    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:20.424913    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:22.996247    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:22.996247    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:23.999503    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:26.240837    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:26.240837    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:26.240837    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:28.831004    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:28.831004    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:29.840201    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:32.034114    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:32.034114    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:32.034114    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:34.593331    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:34.593331    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:35.595834    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:37.803371    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:37.803371    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:37.804025    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:40.383218    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:40.383218    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:41.397866    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:43.634870    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:43.634870    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:43.635192    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:46.302872    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:02:46.302872    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:46.303086    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:48.497182    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:48.497245    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:48.497245    5552 machine.go:94] provisionDockerMachine start ...
	I0421 19:02:48.497245    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:50.686701    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:50.686725    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:50.686725    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:53.275882    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:02:53.276662    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:53.283075    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:02:53.296056    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:02:53.296056    5552 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:02:53.422702    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:02:53.422702    5552 buildroot.go:166] provisioning hostname "ha-736000"
	I0421 19:02:53.422702    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:55.576716    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:55.576716    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:55.577706    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:58.244225    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:02:58.244501    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:58.250965    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:02:58.251253    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:02:58.251253    5552 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-736000 && echo "ha-736000" | sudo tee /etc/hostname
	I0421 19:02:58.407008    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-736000
	
	I0421 19:02:58.407167    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:00.569167    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:00.569472    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:00.569472    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:03.155934    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:03.156583    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:03.163362    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:03.163362    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:03.163362    5552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-736000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-736000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-736000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:03:03.319082    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:03:03.319224    5552 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 19:03:03.319340    5552 buildroot.go:174] setting up certificates
	I0421 19:03:03.319340    5552 provision.go:84] configureAuth start
	I0421 19:03:03.319414    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:05.512506    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:05.512811    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:05.512811    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:08.083232    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:08.084138    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:08.084233    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:10.283567    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:10.283567    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:10.283751    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:12.941557    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:12.942342    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:12.942342    5552 provision.go:143] copyHostCerts
	I0421 19:03:12.942448    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 19:03:12.942448    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 19:03:12.942448    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 19:03:12.943162    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 19:03:12.943970    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 19:03:12.944611    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 19:03:12.944682    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 19:03:12.944772    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 19:03:12.945586    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 19:03:12.946349    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 19:03:12.946349    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 19:03:12.946349    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 19:03:12.947801    5552 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-736000 san=[127.0.0.1 172.27.203.42 ha-736000 localhost minikube]
	I0421 19:03:13.157449    5552 provision.go:177] copyRemoteCerts
	I0421 19:03:13.171734    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:03:13.171734    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:15.350114    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:15.350571    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:15.350631    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:17.956945    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:17.956945    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:17.958289    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:03:18.067148    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8953788s)
	I0421 19:03:18.067148    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 19:03:18.068300    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:03:18.118669    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 19:03:18.119202    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0421 19:03:18.169135    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 19:03:18.169621    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:03:18.220506    5552 provision.go:87] duration metric: took 14.9009391s to configureAuth
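The configureAuth step above issues a server certificate whose SAN list mixes the VM's IPs and host names (127.0.0.1, 172.27.203.42, ha-736000, localhost, minikube). For reference, a minimal Go sketch of issuing such a certificate against an existing CA; the file names and Organization value are illustrative, the CA key is assumed to be an unencrypted PKCS#1 RSA key, and this is not minikube's actual code path:

// certsketch.go: issue a TLS server cert signed by an existing CA, with the
// same kind of SAN list seen in the provision log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a PEM file and returns the DER bytes of its first block.
func mustPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func writePEM(path, blockType string, der []byte, mode os.FileMode) {
	data := pem.EncodeToMemory(&pem.Block{Type: blockType, Bytes: der})
	if err := os.WriteFile(path, data, mode); err != nil {
		panic(err)
	}
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		panic(err)
	}
	// Assumption: the CA private key is PKCS#1 RSA (typical for these certs dirs).
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-736000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: IP addresses plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.203.42")},
		DNSNames:    []string{"ha-736000", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	writePEM("server.pem", "CERTIFICATE", der, 0644)
	writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(serverKey), 0600)
}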
	I0421 19:03:18.220589    5552 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:03:18.221246    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:03:18.221353    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:20.393237    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:20.393237    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:20.393237    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:22.986829    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:22.986829    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:22.993119    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:22.993717    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:22.993717    5552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 19:03:23.123178    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 19:03:23.123347    5552 buildroot.go:70] root file system type: tmpfs
	I0421 19:03:23.123480    5552 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 19:03:23.123480    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:25.297965    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:25.297965    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:25.298376    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:27.908269    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:27.908816    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:27.917771    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:27.917771    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:27.918687    5552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 19:03:28.086293    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 19:03:28.086415    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:30.241996    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:30.241996    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:30.243013    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:32.865415    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:32.865415    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:32.874077    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:32.874077    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:32.874077    5552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 19:03:35.138753    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 19:03:35.138753    5552 machine.go:97] duration metric: took 46.6411776s to provisionDockerMachine
	I0421 19:03:35.138753    5552 client.go:171] duration metric: took 1m58.700076s to LocalClient.Create
	I0421 19:03:35.139299    5552 start.go:167] duration metric: took 1m58.7015668s to libmachine.API.Create "ha-736000"
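The SSH command run a few lines above installs the rendered docker.service only when it differs from what is already on the VM, so an unchanged unit never causes a daemon-reload or restart. A small sketch of building that same diff-or-install one-liner; the helper name is illustrative and running the command over SSH is left to the caller:

// unitswap.go: build the "install the new unit only if it differs" command.
package main

import "fmt"

// swapUnitCmd compares an existing systemd unit with a freshly rendered ".new"
// file and, only when they differ, moves the new file into place and
// reloads/enables/restarts the service.
func swapUnitCmd(unitPath, service string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		unitPath, service)
}

func main() {
	fmt.Println(swapUnitCmd("/lib/systemd/system/docker.service", "docker"))
}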
	I0421 19:03:35.139443    5552 start.go:293] postStartSetup for "ha-736000" (driver="hyperv")
	I0421 19:03:35.139486    5552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:03:35.151604    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:03:35.151604    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:37.257505    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:37.258393    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:37.258393    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:39.854375    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:39.854375    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:39.854375    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:03:39.971809    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8201705s)
	I0421 19:03:39.985257    5552 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:03:39.993101    5552 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:03:39.993101    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 19:03:39.993646    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 19:03:39.993907    5552 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 19:03:39.994510    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 19:03:40.009904    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:03:40.036626    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 19:03:40.090082    5552 start.go:296] duration metric: took 4.9506043s for postStartSetup
	I0421 19:03:40.093073    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:42.281212    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:42.281212    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:42.281299    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:44.906602    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:44.907583    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:44.907583    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:03:44.910691    5552 start.go:128] duration metric: took 2m8.4759703s to createHost
	I0421 19:03:44.910842    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:47.046407    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:47.046407    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:47.046407    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:49.615625    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:49.615625    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:49.621538    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:49.621897    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:49.621897    5552 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 19:03:49.745934    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713726229.759913049
	
	I0421 19:03:49.745934    5552 fix.go:216] guest clock: 1713726229.759913049
	I0421 19:03:49.745934    5552 fix.go:229] Guest: 2024-04-21 19:03:49.759913049 +0000 UTC Remote: 2024-04-21 19:03:44.9107404 +0000 UTC m=+134.332875701 (delta=4.849172649s)
	I0421 19:03:49.745934    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:51.864353    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:51.864818    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:51.864894    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:54.502136    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:54.502136    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:54.508331    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:54.509135    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:54.509135    5552 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713726229
	I0421 19:03:54.657251    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 19:03:49 UTC 2024
	
	I0421 19:03:54.657323    5552 fix.go:236] clock set: Sun Apr 21 19:03:49 UTC 2024
	 (err=<nil>)
	I0421 19:03:54.657323    5552 start.go:83] releasing machines lock for "ha-736000", held for 2m18.2227814s
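The clock fix above reads the guest's "date +%s.%N" output, compares it with the host clock (the drift here was about 4.85s), and then resets the guest clock with "date -s @<epoch>". A rough sketch of that check; the 2-second threshold is an assumption for illustration and the SSH command runner is omitted:

// clocksync.go: decide whether guest clock drift warrants a reset.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// driftFix parses the guest's "seconds.nanoseconds" output and, if the drift
// against the host exceeds maxDrift, returns the command that would reset it.
func driftFix(guestOut string, host time.Time, maxDrift time.Duration) (string, bool) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return "", false
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	guest := time.Unix(sec, nsec)
	drift := host.Sub(guest)
	if drift < 0 {
		drift = -drift
	}
	if drift <= maxDrift {
		return "", false
	}
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
}

func main() {
	cmd, needed := driftFix("1713726229.759913049", time.Now(), 2*time.Second)
	fmt.Println(needed, cmd)
}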
	I0421 19:03:54.657508    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:56.805638    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:56.805818    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:56.805897    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:59.454196    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:59.454246    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:59.458530    5552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:03:59.458736    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:59.470839    5552 ssh_runner.go:195] Run: cat /version.json
	I0421 19:03:59.471878    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:01.617429    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:01.617429    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:01.617429    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:04:01.660528    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:01.660629    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:01.660691    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:04:04.338128    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:04:04.338128    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:04.338370    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:04:04.363845    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:04:04.363845    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:04.364484    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:04:04.588205    5552 ssh_runner.go:235] Completed: cat /version.json: (5.1173293s)
	I0421 19:04:04.588205    5552 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1295592s)
	I0421 19:04:04.601494    5552 ssh_runner.go:195] Run: systemctl --version
	I0421 19:04:04.625794    5552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:04:04.635564    5552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:04:04.649566    5552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:04:04.682420    5552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:04:04.682420    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:04:04.682420    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:04:04.737471    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 19:04:04.776605    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 19:04:04.800286    5552 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 19:04:04.815490    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 19:04:04.859890    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:04:04.898481    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 19:04:04.937377    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:04:04.974608    5552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:04:05.011637    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 19:04:05.049390    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 19:04:05.087507    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 19:04:05.122971    5552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:04:05.158158    5552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:04:05.190111    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:05.409983    5552 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 19:04:05.448087    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:04:05.466371    5552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 19:04:05.508677    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:04:05.549184    5552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:04:05.598844    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:04:05.638574    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:04:05.678738    5552 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 19:04:05.751004    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
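Before settling on Docker as the runtime, the commands above probe containerd and crio with "systemctl is-active --quiet" and stop whichever is running. A sketch of that probe-then-stop pattern, executed locally for illustration rather than over the VM's SSH session:

// runtimestop.go: stop a competing container runtime service if it is active.
package main

import (
	"fmt"
	"os/exec"
)

// ensureStopped mirrors the log: check `systemctl is-active --quiet service <name>`
// and, when the unit is active (exit code 0), issue `systemctl stop -f <name>`.
func ensureStopped(name string) error {
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", name).Run(); err != nil {
		// Non-zero exit: unit not active, nothing to stop.
		return nil
	}
	return exec.Command("sudo", "systemctl", "stop", "-f", name).Run()
}

func main() {
	for _, svc := range []string{"containerd", "crio"} {
		if err := ensureStopped(svc); err != nil {
			fmt.Println("failed to stop", svc, ":", err)
		}
	}
}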
	I0421 19:04:05.778161    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:04:05.828941    5552 ssh_runner.go:195] Run: which cri-dockerd
	I0421 19:04:05.854363    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 19:04:05.875126    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 19:04:05.924396    5552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 19:04:06.147509    5552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 19:04:06.381492    5552 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 19:04:06.381720    5552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 19:04:06.432949    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:06.657792    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:04:09.243873    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5860634s)
	I0421 19:04:09.259176    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 19:04:09.305872    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:04:09.350758    5552 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 19:04:09.586686    5552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 19:04:09.819494    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:10.056078    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 19:04:10.110920    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:04:10.151889    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:10.408280    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 19:04:10.526327    5552 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 19:04:10.540833    5552 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 19:04:10.560112    5552 start.go:562] Will wait 60s for crictl version
	I0421 19:04:10.583646    5552 ssh_runner.go:195] Run: which crictl
	I0421 19:04:10.605954    5552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:04:10.670354    5552 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 19:04:10.683529    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:04:10.732566    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:04:10.772015    5552 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 19:04:10.772015    5552 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 19:04:10.781263    5552 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 19:04:10.781263    5552 ip.go:210] interface addr: 172.27.192.1/20
	I0421 19:04:10.795495    5552 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 19:04:10.803012    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
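The /etc/hosts update above is idempotent: it filters out any existing host.minikube.internal line, appends the fresh "IP<TAB>name" mapping, and copies the temp file back over /etc/hosts. A sketch that builds the same command string for an arbitrary ip/name pair; only the inner command is produced, the /bin/bash -c wrapper is omitted:

// hostsentry.go: build the grep -v / echo / cp pipeline from the log.
package main

import "fmt"

// updateHostsCmd drops any existing line ending in "<TAB>name" and appends a
// fresh "ip<TAB>name" entry before copying the result back into place.
func updateHostsCmd(ip, name string) string {
	entry := ip + "\t" + name // literal tab between IP and host name
	return fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, entry)
}

func main() {
	fmt.Println(updateHostsCmd("172.27.192.1", "host.minikube.internal"))
}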
	I0421 19:04:10.850333    5552 kubeadm.go:877] updating cluster {Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:04:10.850333    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:04:10.859861    5552 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 19:04:10.884882    5552 docker.go:685] Got preloaded images: 
	I0421 19:04:10.884882    5552 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0421 19:04:10.898923    5552 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 19:04:10.936461    5552 ssh_runner.go:195] Run: which lz4
	I0421 19:04:10.943680    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0421 19:04:10.969878    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0421 19:04:10.978320    5552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:04:10.978554    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0421 19:04:13.083229    5552 docker.go:649] duration metric: took 2.1288251s to copy over tarball
	I0421 19:04:13.096507    5552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 19:04:21.615659    5552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5190921s)
	I0421 19:04:21.616198    5552 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0421 19:04:21.703198    5552 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 19:04:21.723346    5552 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0421 19:04:21.769975    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:22.014696    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:04:25.404114    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3881643s)
	I0421 19:04:25.415866    5552 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 19:04:25.445212    5552 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0421 19:04:25.445307    5552 cache_images.go:84] Images are preloaded, skipping loading
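The preload logic above lists what the Docker daemon already has and skips loading the tarball when the expected images are present. A sketch of that check against the kube-apiserver image from this run, executed against a local daemon for illustration:

// preloadcheck.go: decide whether the preload tarball still needs loading.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage reports whether `docker images --format {{.Repository}}:{{.Tag}}`
// lists the given reference.
func hasImage(ref string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == ref {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	if ok {
		fmt.Println("images are preloaded, skipping tarball load")
	} else {
		fmt.Println("preload tarball needs to be copied and extracted")
	}
}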
	I0421 19:04:25.445307    5552 kubeadm.go:928] updating node { 172.27.203.42 8443 v1.30.0 docker true true} ...
	I0421 19:04:25.445475    5552 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-736000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.203.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
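The kubelet drop-in above has the node name and node IP baked into its ExecStart line. A sketch of rendering a similar unit with text/template; the template text and field names are illustrative, not minikube's actual template:

// kubeletflags.go: render a kubelet systemd drop-in for a given node.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	BinDir   string
	NodeName string
	NodeIP   string
}

const unitTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.30.0",
		NodeName: "ha-736000",
		NodeIP:   "172.27.203.42",
	})
}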
	I0421 19:04:25.456052    5552 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0421 19:04:25.497804    5552 cni.go:84] Creating CNI manager for ""
	I0421 19:04:25.497933    5552 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 19:04:25.497933    5552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:04:25.498039    5552 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.203.42 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-736000 NodeName:ha-736000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.203.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.203.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 19:04:25.498238    5552 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.203.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-736000"
	  kubeletExtraArgs:
	    node-ip: 172.27.203.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.203.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
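The rendered kubeadm config is a multi-document YAML like the one printed above. A sketch that re-reads it and pulls out two fields worth sanity-checking, the advertise address and the pod subnet; the file path and struct shapes are assumptions for illustration, and it uses gopkg.in/yaml.v3:

// kubeadmcheck.go: read selected fields back out of the rendered kubeadm.yaml.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// doc covers only the fields this check needs; extra fields are ignored.
type doc struct {
	Kind             string `yaml:"kind"`
	LocalAPIEndpoint struct {
		AdvertiseAddress string `yaml:"advertiseAddress"`
		BindPort         int    `yaml:"bindPort"`
	} `yaml:"localAPIEndpoint"`
	Networking struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		switch d.Kind {
		case "InitConfiguration":
			fmt.Println("advertiseAddress:", d.LocalAPIEndpoint.AdvertiseAddress)
		case "ClusterConfiguration":
			fmt.Println("podSubnet:", d.Networking.PodSubnet)
		}
	}
}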
	
	I0421 19:04:25.498238    5552 kube-vip.go:111] generating kube-vip config ...
	I0421 19:04:25.513247    5552 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 19:04:25.542566    5552 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 19:04:25.542566    5552 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
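The kube-vip static pod above is driven almost entirely by its env block. A sketch that builds that env list as data and prints it as YAML, making the knobs (VIP address, port, interface, leader-election settings) explicit; the values mirror this run, but the helper itself is illustrative rather than minikube's generator:

// kubevipenv.go: emit the kube-vip env block for a given VIP/port/interface.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type envVar struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

func kubeVipEnv(vip, port, iface string) []envVar {
	return []envVar{
		{"vip_arp", "true"},
		{"port", port},
		{"vip_interface", iface},
		{"vip_cidr", "32"},
		{"dns_mode", "first"},
		{"cp_enable", "true"},
		{"cp_namespace", "kube-system"},
		{"vip_leaderelection", "true"},
		{"vip_leasename", "plndr-cp-lock"},
		{"vip_leaseduration", "5"},
		{"vip_renewdeadline", "3"},
		{"vip_retryperiod", "1"},
		{"address", vip},
		{"prometheus_server", ":2112"},
		{"lb_enable", "true"},
		{"lb_port", port},
	}
}

func main() {
	out, err := yaml.Marshal(kubeVipEnv("172.27.207.254", "8443", "eth0"))
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}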
	I0421 19:04:25.556663    5552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:04:25.575644    5552 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:04:25.590879    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0421 19:04:25.610083    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0421 19:04:25.650466    5552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:04:25.688032    5552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0421 19:04:25.724582    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0421 19:04:25.776336    5552 ssh_runner.go:195] Run: grep 172.27.207.254	control-plane.minikube.internal$ /etc/hosts
	I0421 19:04:25.784514    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:04:25.827921    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:26.058956    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:04:26.093274    5552 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000 for IP: 172.27.203.42
	I0421 19:04:26.093274    5552 certs.go:194] generating shared ca certs ...
	I0421 19:04:26.093274    5552 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.104669    5552 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 19:04:26.123493    5552 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 19:04:26.123562    5552 certs.go:256] generating profile certs ...
	I0421 19:04:26.124144    5552 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key
	I0421 19:04:26.124144    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.crt with IP's: []
	I0421 19:04:26.304906    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.crt ...
	I0421 19:04:26.304906    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.crt: {Name:mk864221f165ddb5f2d013dba1047c26a1e5485c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.304906    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key ...
	I0421 19:04:26.304906    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key: {Name:mk413de5828b08b138b88cdfe9e6974631020fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.307461    5552 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40
	I0421 19:04:26.307834    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.203.42 172.27.207.254]
	I0421 19:04:26.439620    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40 ...
	I0421 19:04:26.439620    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40: {Name:mk12ca28fdb0696dcf7324d3690bc3cd0fb51930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.440832    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40 ...
	I0421 19:04:26.440832    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40: {Name:mk4e0ce450f4a7e20327c5c3823871a125afc773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.441979    5552 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt
	I0421 19:04:26.452434    5552 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key
	I0421 19:04:26.454433    5552 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key
	I0421 19:04:26.454797    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt with IP's: []
	I0421 19:04:26.654061    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt ...
	I0421 19:04:26.654061    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt: {Name:mk36ab8a1f5776f6510e50d2f510085260e82b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.655385    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key ...
	I0421 19:04:26.655385    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key: {Name:mk4cb5d6ed1625767c437cba204364341fbcf0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.656674    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:04:26.656674    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:04:26.656674    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:04:26.657377    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:04:26.657377    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:04:26.657377    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:04:26.657972    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:04:26.666469    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:04:26.667472    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 19:04:26.675604    5552 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 19:04:26.675604    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 19:04:26.675604    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 19:04:26.676314    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 19:04:26.676534    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 19:04:26.677110    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 19:04:26.677567    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 19:04:26.677567    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:26.677567    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 19:04:26.679536    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:04:26.732440    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:04:26.781676    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:04:26.834091    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:04:26.886039    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 19:04:26.945983    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 19:04:26.989297    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:04:27.042879    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 19:04:27.092744    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 19:04:27.147945    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:04:27.199609    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 19:04:27.253541    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:04:27.312603    5552 ssh_runner.go:195] Run: openssl version
	I0421 19:04:27.339955    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 19:04:27.376654    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 19:04:27.384522    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 19:04:27.398003    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 19:04:27.422031    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:04:27.460021    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:04:27.497253    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:27.505943    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:27.521167    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:27.546633    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:04:27.582900    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 19:04:27.623889    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 19:04:27.630916    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 19:04:27.646961    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 19:04:27.673140    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
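Each CA certificate copied under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash, as the test -L / ln -fs commands above show. A sketch of that step; the paths are illustrative and the real run happens over SSH with sudo:

// certlink.go: create the "<hash>.0" symlink for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash runs `openssl x509 -hash -noout -in <cert>` and symlinks
// <certsDir>/<hash>.0 -> <cert> when the link does not exist yet.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}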
	I0421 19:04:27.708831    5552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:04:27.715534    5552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:04:27.715918    5552 kubeadm.go:391] StartCluster: {Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:04:27.726804    5552 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 19:04:27.763294    5552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 19:04:27.796784    5552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:04:27.830565    5552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:04:27.850821    5552 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:04:27.850890    5552 kubeadm.go:156] found existing configuration files:
	
	I0421 19:04:27.864368    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:04:27.886055    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:04:27.901280    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:04:27.937988    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:04:27.958637    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:04:27.973010    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:04:28.011309    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:04:28.041505    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:04:28.056843    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:04:28.094003    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:04:28.115624    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:04:28.131210    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:04:28.156280    5552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:04:28.475739    5552 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:04:28.475895    5552 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:04:28.685695    5552 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:04:28.685786    5552 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:04:28.686172    5552 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 19:04:29.035161    5552 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:04:29.039754    5552 out.go:204]   - Generating certificates and keys ...
	I0421 19:04:29.039954    5552 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:04:29.040173    5552 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:04:29.842647    5552 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 19:04:30.030494    5552 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 19:04:30.142205    5552 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 19:04:30.752084    5552 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 19:04:30.997008    5552 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 19:04:30.997008    5552 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-736000 localhost] and IPs [172.27.203.42 127.0.0.1 ::1]
	I0421 19:04:31.192128    5552 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 19:04:31.192689    5552 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-736000 localhost] and IPs [172.27.203.42 127.0.0.1 ::1]
	I0421 19:04:31.354373    5552 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 19:04:31.455055    5552 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 19:04:31.599614    5552 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 19:04:31.599614    5552 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:04:31.781223    5552 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:04:31.913360    5552 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:04:32.063695    5552 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:04:32.405612    5552 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:04:32.787755    5552 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:04:32.788754    5552 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:04:32.792935    5552 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:04:32.797405    5552 out.go:204]   - Booting up control plane ...
	I0421 19:04:32.797405    5552 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:04:32.798902    5552 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:04:32.800001    5552 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:04:32.822977    5552 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:04:32.822977    5552 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:04:32.823983    5552 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:04:33.054305    5552 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:04:33.054496    5552 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:04:34.056012    5552 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002214916s
	I0421 19:04:34.056611    5552 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:04:42.946409    5552 kubeadm.go:309] [api-check] The API server is healthy after 8.889517146s
	I0421 19:04:42.967861    5552 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:04:43.010739    5552 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:04:43.102095    5552 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:04:43.102633    5552 kubeadm.go:309] [mark-control-plane] Marking the node ha-736000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:04:43.122719    5552 kubeadm.go:309] [bootstrap-token] Using token: 7gx0zq.bjmn3uvg7raru7d7
	I0421 19:04:43.127348    5552 out.go:204]   - Configuring RBAC rules ...
	I0421 19:04:43.127738    5552 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:04:43.141987    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:04:43.159935    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:04:43.167506    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:04:43.177890    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:04:43.193066    5552 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:04:43.361823    5552 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:04:43.852415    5552 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:04:44.359027    5552 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:04:44.360563    5552 kubeadm.go:309] 
	I0421 19:04:44.360563    5552 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:04:44.360563    5552 kubeadm.go:309] 
	I0421 19:04:44.360563    5552 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:04:44.360563    5552 kubeadm.go:309] 
	I0421 19:04:44.360563    5552 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:04:44.360563    5552 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:04:44.361205    5552 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:04:44.361390    5552 kubeadm.go:309] 
	I0421 19:04:44.361486    5552 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:04:44.361623    5552 kubeadm.go:309] 
	I0421 19:04:44.361623    5552 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:04:44.361623    5552 kubeadm.go:309] 
	I0421 19:04:44.361623    5552 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:04:44.361623    5552 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:04:44.361623    5552 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:04:44.361623    5552 kubeadm.go:309] 
	I0421 19:04:44.362163    5552 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:04:44.362637    5552 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:04:44.362637    5552 kubeadm.go:309] 
	I0421 19:04:44.362637    5552 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7gx0zq.bjmn3uvg7raru7d7 \
	I0421 19:04:44.362637    5552 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 \
	I0421 19:04:44.363206    5552 kubeadm.go:309] 	--control-plane 
	I0421 19:04:44.363317    5552 kubeadm.go:309] 
	I0421 19:04:44.363614    5552 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:04:44.363675    5552 kubeadm.go:309] 
	I0421 19:04:44.363830    5552 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7gx0zq.bjmn3uvg7raru7d7 \
	I0421 19:04:44.363830    5552 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 19:04:44.364960    5552 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:04:44.365046    5552 cni.go:84] Creating CNI manager for ""
	I0421 19:04:44.365046    5552 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 19:04:44.367207    5552 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0421 19:04:44.384305    5552 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 19:04:44.392708    5552 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 19:04:44.392708    5552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0421 19:04:44.446584    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 19:04:45.103957    5552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:04:45.118640    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-736000 minikube.k8s.io/updated_at=2024_04_21T19_04_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-736000 minikube.k8s.io/primary=true
	I0421 19:04:45.119636    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:45.131295    5552 ops.go:34] apiserver oom_adj: -16
	I0421 19:04:45.436046    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:45.945670    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:46.436109    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:46.935669    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:47.438082    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:47.938686    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:48.441995    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:48.942248    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:49.445045    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:49.941613    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:50.440983    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:50.942109    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:51.448280    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:51.949487    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:52.433831    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:52.936277    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:53.439401    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:53.944252    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:54.447347    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:54.947896    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:55.436395    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:55.938358    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:56.438762    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:56.941961    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:57.447496    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:57.564796    5552 kubeadm.go:1107] duration metric: took 12.4607504s to wait for elevateKubeSystemPrivileges
	W0421 19:04:57.564925    5552 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:04:57.564925    5552 kubeadm.go:393] duration metric: took 29.8487959s to StartCluster
	I0421 19:04:57.565034    5552 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:57.565143    5552 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:04:57.566997    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:57.568305    5552 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:04:57.568305    5552 start.go:240] waiting for startup goroutines ...
	I0421 19:04:57.568305    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 19:04:57.568305    5552 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:04:57.568305    5552 addons.go:69] Setting storage-provisioner=true in profile "ha-736000"
	I0421 19:04:57.568305    5552 addons.go:234] Setting addon storage-provisioner=true in "ha-736000"
	I0421 19:04:57.568848    5552 addons.go:69] Setting default-storageclass=true in profile "ha-736000"
	I0421 19:04:57.568949    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:04:57.568991    5552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-736000"
	I0421 19:04:57.569282    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:04:57.569895    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:57.569895    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:57.785901    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 19:04:58.303516    5552 start.go:946] {"host.minikube.internal": 172.27.192.1} host record injected into CoreDNS's ConfigMap
	I0421 19:04:59.846430    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:59.846430    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:59.846430    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:59.846608    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:59.849412    5552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:04:59.847523    5552 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:04:59.851925    5552 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:04:59.851925    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:04:59.851925    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:59.852569    5552 kapi.go:59] client config for ha-736000: &rest.Config{Host:"https://172.27.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 19:04:59.853826    5552 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 19:04:59.853826    5552 addons.go:234] Setting addon default-storageclass=true in "ha-736000"
	I0421 19:04:59.854406    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:04:59.855260    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:05:02.137208    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:02.137208    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:02.137208    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:02.272867    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:02.272867    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:02.273304    5552 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:05:02.273384    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:05:02.273442    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:05:04.567897    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:04.568903    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:04.568965    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:04.952740    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:05:04.953782    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:04.954443    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:05:05.104670    5552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:05:07.291443    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:05:07.292008    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:07.292336    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:05:07.429614    5552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:05:07.626693    5552 round_trippers.go:463] GET https://172.27.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0421 19:05:07.626777    5552 round_trippers.go:469] Request Headers:
	I0421 19:05:07.626777    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:05:07.626777    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:05:07.640532    5552 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 19:05:07.642341    5552 round_trippers.go:463] PUT https://172.27.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0421 19:05:07.642410    5552 round_trippers.go:469] Request Headers:
	I0421 19:05:07.642410    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:05:07.642410    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:05:07.642410    5552 round_trippers.go:473]     Content-Type: application/json
	I0421 19:05:07.652563    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:05:07.658203    5552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0421 19:05:07.660696    5552 addons.go:505] duration metric: took 10.0923197s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0421 19:05:07.660872    5552 start.go:245] waiting for cluster config update ...
	I0421 19:05:07.660872    5552 start.go:254] writing updated cluster config ...
	I0421 19:05:07.668354    5552 out.go:177] 
	I0421 19:05:07.675699    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:05:07.675699    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:05:07.681640    5552 out.go:177] * Starting "ha-736000-m02" control-plane node in "ha-736000" cluster
	I0421 19:05:07.685843    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:05:07.686020    5552 cache.go:56] Caching tarball of preloaded images
	I0421 19:05:07.686125    5552 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 19:05:07.686125    5552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 19:05:07.686827    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:05:07.692430    5552 start.go:360] acquireMachinesLock for ha-736000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:05:07.693007    5552 start.go:364] duration metric: took 576.6µs to acquireMachinesLock for "ha-736000-m02"
	I0421 19:05:07.693227    5552 start.go:93] Provisioning new machine with config: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:05:07.693512    5552 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0421 19:05:07.700324    5552 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:05:07.700734    5552 start.go:159] libmachine.API.Create for "ha-736000" (driver="hyperv")
	I0421 19:05:07.700794    5552 client.go:168] LocalClient.Create starting
	I0421 19:05:07.700944    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 19:05:07.701606    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:05:07.701606    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:05:07.701795    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 19:05:07.701795    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:05:07.702108    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:05:07.702336    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 19:05:09.713653    5552 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 19:05:09.713653    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:09.714391    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 19:05:11.563036    5552 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 19:05:11.563036    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:11.563280    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:05:13.141366    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:05:13.141366    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:13.142386    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:05:16.967624    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:05:16.968587    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:16.971314    5552 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:05:17.532703    5552 main.go:141] libmachine: Creating SSH key...
	I0421 19:05:17.749009    5552 main.go:141] libmachine: Creating VM...
	I0421 19:05:17.749009    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:05:20.796382    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:05:20.796382    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:20.796382    5552 main.go:141] libmachine: Using switch "Default Switch"
	I0421 19:05:20.796382    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:05:22.674093    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:05:22.674093    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:22.674188    5552 main.go:141] libmachine: Creating VHD
	I0421 19:05:22.674188    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 19:05:26.482813    5552 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5F01B524-1FF1-472D-8B06-C8BC95607249
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 19:05:26.482883    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:26.482883    5552 main.go:141] libmachine: Writing magic tar header
	I0421 19:05:26.482883    5552 main.go:141] libmachine: Writing SSH key tar header
	I0421 19:05:26.492850    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 19:05:29.724475    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:29.724999    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:29.724999    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\disk.vhd' -SizeBytes 20000MB
	I0421 19:05:32.306610    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:32.306956    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:32.307065    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-736000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 19:05:36.065155    5552 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-736000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 19:05:36.065155    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:36.065304    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-736000-m02 -DynamicMemoryEnabled $false
	I0421 19:05:38.380352    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:38.381339    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:38.381413    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-736000-m02 -Count 2
	I0421 19:05:40.595994    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:40.595994    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:40.596107    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-736000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\boot2docker.iso'
	I0421 19:05:43.242471    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:43.242471    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:43.243672    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-736000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\disk.vhd'
	I0421 19:05:46.003739    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:46.003739    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:46.003739    5552 main.go:141] libmachine: Starting VM...
	I0421 19:05:46.003739    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-736000-m02
	I0421 19:05:49.147947    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:49.148749    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:49.148749    5552 main.go:141] libmachine: Waiting for host to start...
	I0421 19:05:49.148855    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:05:51.414836    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:51.414836    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:51.414836    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:54.005405    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:54.005991    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:55.013311    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:05:57.215410    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:57.215644    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:57.215644    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:59.817941    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:59.818198    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:00.831587    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:03.065763    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:03.065763    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:03.065763    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:05.648098    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:06:05.648098    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:06.658381    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:08.881166    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:08.881166    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:08.881994    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:11.452229    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:06:11.452229    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:12.459363    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:14.705118    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:14.706109    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:14.706109    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:17.387537    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:17.387619    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:17.387619    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:19.581367    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:19.581688    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:19.581688    5552 machine.go:94] provisionDockerMachine start ...
	I0421 19:06:19.581883    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:21.786074    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:21.786615    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:21.786718    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:24.448538    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:24.448538    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:24.455528    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:24.455528    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:24.455528    5552 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:06:24.592817    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:06:24.592880    5552 buildroot.go:166] provisioning hostname "ha-736000-m02"
	I0421 19:06:24.592880    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:26.796991    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:26.796991    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:26.797338    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:29.483085    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:29.483085    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:29.490249    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:29.490316    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:29.490316    5552 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-736000-m02 && echo "ha-736000-m02" | sudo tee /etc/hostname
	I0421 19:06:29.650175    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-736000-m02
	
	I0421 19:06:29.650236    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:31.777015    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:31.777015    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:31.778063    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:34.386248    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:34.386248    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:34.392798    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:34.393530    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:34.393530    5552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-736000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-736000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-736000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:06:34.537154    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:06:34.537154    5552 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 19:06:34.537154    5552 buildroot.go:174] setting up certificates
	I0421 19:06:34.537154    5552 provision.go:84] configureAuth start
	I0421 19:06:34.537688    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:36.710366    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:36.710989    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:36.711049    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:39.342276    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:39.342543    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:39.342543    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:41.500093    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:41.500093    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:41.500338    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:44.108611    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:44.108675    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:44.108675    5552 provision.go:143] copyHostCerts
	I0421 19:06:44.108828    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 19:06:44.109340    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 19:06:44.109433    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 19:06:44.109944    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 19:06:44.111156    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 19:06:44.111156    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 19:06:44.111156    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 19:06:44.111945    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 19:06:44.113135    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 19:06:44.113481    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 19:06:44.113481    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 19:06:44.114064    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 19:06:44.115030    5552 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-736000-m02 san=[127.0.0.1 172.27.196.39 ha-736000-m02 localhost minikube]
	I0421 19:06:44.723267    5552 provision.go:177] copyRemoteCerts
	I0421 19:06:44.737676    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:06:44.737676    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:46.919584    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:46.919776    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:46.919854    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:49.497523    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:49.497523    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:49.498451    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:06:49.604625    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8669139s)
	I0421 19:06:49.604625    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 19:06:49.605647    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:06:49.656825    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 19:06:49.657362    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 19:06:49.705743    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 19:06:49.706254    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:06:49.759260    5552 provision.go:87] duration metric: took 15.2219978s to configureAuth
	I0421 19:06:49.759260    5552 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:06:49.760262    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:06:49.760262    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:51.944997    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:51.946153    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:51.946153    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:54.558952    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:54.559360    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:54.569702    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:54.569702    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:54.569702    5552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 19:06:54.702604    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 19:06:54.702712    5552 buildroot.go:70] root file system type: tmpfs
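The step above determines the guest's root filesystem type by running "df --output=fstype /" over SSH and keeping the last line of output. A minimal local Go sketch of the same check (the rootFSType helper name and local execution are assumptions of the sketch, not minikube code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs `df --output=fstype /` and returns the last non-empty field,
// mirroring the check logged above (e.g. "tmpfs" on the Buildroot guest).
func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(strings.TrimSpace(string(out)))
	if len(fields) == 0 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	return fields[len(fields)-1], nil
}

func main() {
	fsType, err := rootFSType()
	if err != nil {
		fmt.Println("df failed:", err)
		return
	}
	fmt.Println("root filesystem type:", fsType)
}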
	I0421 19:06:54.703189    5552 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 19:06:54.703244    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:56.874158    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:56.875197    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:56.875231    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:59.477857    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:59.478134    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:59.484057    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:59.484465    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:59.484465    5552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.203.42"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 19:06:59.640807    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.203.42
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 19:06:59.640923    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:01.738300    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:01.738377    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:01.738471    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:04.327283    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:04.327747    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:04.335013    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:07:04.335147    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:07:04.335147    5552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 19:07:06.638246    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 19:07:06.638246    5552 machine.go:97] duration metric: took 47.0562236s to provisionDockerMachine
	I0421 19:07:06.638246    5552 client.go:171] duration metric: took 1m58.9366079s to LocalClient.Create
	I0421 19:07:06.638246    5552 start.go:167] duration metric: took 1m58.9366678s to libmachine.API.Create "ha-736000"
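The docker.service update above follows a write-if-changed pattern: the rendered unit is written to docker.service.new, diffed against the installed unit, and only when they differ is it moved into place and the daemon reloaded, enabled and restarted. A Go sketch of that pattern against a local filesystem (the function name and the trimmed unit body are placeholders):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnitIfChanged mirrors the "diff || { mv; daemon-reload; restart; }"
// step above: the unit file is only replaced (and docker restarted) when the
// rendered content differs from what is already on disk.
func installUnitIfChanged(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // nothing to do
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := installUnitIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Println(err)
	}
}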
	I0421 19:07:06.638246    5552 start.go:293] postStartSetup for "ha-736000-m02" (driver="hyperv")
	I0421 19:07:06.638246    5552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:07:06.652103    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:07:06.652103    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:08.815691    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:08.815691    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:08.816547    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:11.433445    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:11.433445    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:11.434555    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:07:11.563623    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9114852s)
	I0421 19:07:11.578158    5552 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:07:11.587695    5552 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:07:11.587762    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 19:07:11.587817    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 19:07:11.588591    5552 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 19:07:11.588591    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 19:07:11.603708    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:07:11.622715    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 19:07:11.673716    5552 start.go:296] duration metric: took 5.0354338s for postStartSetup
	I0421 19:07:11.676704    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:13.848297    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:13.848297    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:13.848297    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:16.502813    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:16.503493    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:16.503493    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:07:16.506435    5552 start.go:128] duration metric: took 2m8.8116112s to createHost
	I0421 19:07:16.506569    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:18.673150    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:18.673150    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:18.673150    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:21.315039    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:21.315039    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:21.322073    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:07:21.322577    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:07:21.322652    5552 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 19:07:21.448006    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713726441.448751925
	
	I0421 19:07:21.448060    5552 fix.go:216] guest clock: 1713726441.448751925
	I0421 19:07:21.448060    5552 fix.go:229] Guest: 2024-04-21 19:07:21.448751925 +0000 UTC Remote: 2024-04-21 19:07:16.5065063 +0000 UTC m=+345.927139301 (delta=4.942245625s)
	I0421 19:07:21.448217    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:23.604764    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:23.604822    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:23.604822    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:26.269352    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:26.269352    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:26.277538    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:07:26.277937    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:07:26.278031    5552 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713726441
	I0421 19:07:26.423452    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 19:07:21 UTC 2024
	
	I0421 19:07:26.423452    5552 fix.go:236] clock set: Sun Apr 21 19:07:21 UTC 2024
	 (err=<nil>)
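The clock-sync step above reads the guest's clock with "date +%s.%N", compares it to the controller's recorded time (the delta here is about 4.9s), and then resets the guest with "sudo date -s @<epoch>". A small Go sketch of the drift calculation (the 2-second threshold and helper name are assumptions):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and reports how far the
// guest clock is from the given reference time.
func clockDelta(guestOutput string, reference time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64((secs-math.Trunc(secs))*1e9))
	return guest.Sub(reference), nil
}

func main() {
	delta, err := clockDelta("1713726441.448751925", time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("guest clock is %v away from this machine\n", delta)
	if delta.Abs() > 2*time.Second {
		fmt.Println("would run on the guest: sudo date -s @<reference epoch>")
	}
}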
	I0421 19:07:26.423452    5552 start.go:83] releasing machines lock for "ha-736000-m02", held for 2m18.7294005s
	I0421 19:07:26.424006    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:28.659593    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:28.659593    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:28.659593    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:31.307840    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:31.308137    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:31.311626    5552 out.go:177] * Found network options:
	I0421 19:07:31.314485    5552 out.go:177]   - NO_PROXY=172.27.203.42
	W0421 19:07:31.316813    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:07:31.318181    5552 out.go:177]   - NO_PROXY=172.27.203.42
	W0421 19:07:31.321514    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:07:31.322797    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:07:31.326109    5552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:07:31.326254    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:31.335806    5552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 19:07:31.336818    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:33.558045    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:36.334163    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:36.334163    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:36.334163    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:07:36.361767    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:36.361767    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:36.362299    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:07:36.436278    5552 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1004359s)
	W0421 19:07:36.436278    5552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:07:36.452324    5552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:07:36.588566    5552 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.26236s)
	I0421 19:07:36.588566    5552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:07:36.588566    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:07:36.588566    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:07:36.646964    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 19:07:36.683642    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 19:07:36.707446    5552 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 19:07:36.722523    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 19:07:36.759314    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:07:36.795869    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 19:07:36.838559    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:07:36.874930    5552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:07:36.907895    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 19:07:36.939993    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 19:07:36.976624    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 19:07:37.016479    5552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:07:37.050022    5552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:07:37.083472    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:37.316947    5552 ssh_runner.go:195] Run: sudo systemctl restart containerd
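The sed pipeline above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), adjusts the sandbox image and CNI settings, and then restarts containerd. A Go sketch of just the SystemdCgroup edit (path and function name are illustrative):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfsDriver mimics the sed edit above: force `SystemdCgroup = false`
// in containerd's config.toml so containerd and the kubelet agree on cgroupfs.
func setCgroupfsDriver(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setCgroupfsDriver("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}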
	I0421 19:07:37.355903    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:07:37.370588    5552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 19:07:37.419600    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:07:37.462311    5552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:07:37.510241    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:07:37.552195    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:07:37.594916    5552 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 19:07:37.677804    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:07:37.710754    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:07:37.765570    5552 ssh_runner.go:195] Run: which cri-dockerd
	I0421 19:07:37.785643    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 19:07:37.809359    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 19:07:37.862713    5552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 19:07:38.095142    5552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 19:07:38.308663    5552 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 19:07:38.308787    5552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 19:07:38.360643    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:38.576226    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:07:41.149289    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5729449s)
	I0421 19:07:41.162923    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 19:07:41.204134    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:07:41.247069    5552 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 19:07:41.474152    5552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 19:07:41.700145    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:41.938709    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 19:07:41.994817    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:07:42.039196    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:42.274676    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 19:07:42.394442    5552 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 19:07:42.408552    5552 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 19:07:42.418505    5552 start.go:562] Will wait 60s for crictl version
	I0421 19:07:42.431175    5552 ssh_runner.go:195] Run: which crictl
	I0421 19:07:42.455227    5552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:07:42.521470    5552 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 19:07:42.531089    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:07:42.584978    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:07:42.625863    5552 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 19:07:42.629432    5552 out.go:177]   - env NO_PROXY=172.27.203.42
	I0421 19:07:42.632347    5552 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 19:07:42.637683    5552 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 19:07:42.637828    5552 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 19:07:42.637828    5552 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 19:07:42.637828    5552 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 19:07:42.640156    5552 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 19:07:42.640156    5552 ip.go:210] interface addr: 172.27.192.1/20
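getIPForInterface above scans the host's adapters for one whose name starts with "vEthernet (Default Switch)" and picks its IPv4 address (172.27.192.1/20) as the host-side address for host.minikube.internal. A standalone Go sketch of that lookup (the helper name is ours, not minikube's ip.go):

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterfacePrefix returns the first IPv4 address found on an interface
// whose name starts with the given prefix.
func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			if ipNet, ok := addr.(*net.IPNet); ok && ipNet.IP.To4() != nil {
				return ipNet.IP, nil // e.g. 172.27.192.1 above
			}
		}
	}
	return nil, fmt.Errorf("no IPv4 address on an interface matching %q", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host-side switch address:", ip)
}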
	I0421 19:07:42.654525    5552 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 19:07:42.662580    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
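The /etc/hosts update above is made idempotent by stripping any existing host.minikube.internal line before appending the fresh mapping. The same idea in Go (a local sketch; the function name and file permissions are assumptions):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so it contains exactly one
// line mapping name to ip, mirroring the "grep -v ...; echo ..." pipeline above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "172.27.192.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}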
	I0421 19:07:42.691821    5552 mustload.go:65] Loading cluster: ha-736000
	I0421 19:07:42.692926    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:07:42.693879    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:07:44.821210    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:44.821334    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:44.821334    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:07:44.822096    5552 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000 for IP: 172.27.196.39
	I0421 19:07:44.822158    5552 certs.go:194] generating shared ca certs ...
	I0421 19:07:44.822158    5552 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:07:44.822742    5552 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 19:07:44.823056    5552 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 19:07:44.823568    5552 certs.go:256] generating profile certs ...
	I0421 19:07:44.823649    5552 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key
	I0421 19:07:44.824229    5552 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4
	I0421 19:07:44.824395    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.203.42 172.27.196.39 172.27.207.254]
	I0421 19:07:44.941624    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4 ...
	I0421 19:07:44.942650    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4: {Name:mkdf65cadb4d3eb2882aecf91b5b8bc56bf5ae8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:07:44.943986    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4 ...
	I0421 19:07:44.943986    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4: {Name:mk34d9f61d951b75fdc47c93983e3d4605d204e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:07:44.945206    5552 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt
	I0421 19:07:44.958259    5552 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key
	I0421 19:07:44.959574    5552 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key
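The apiserver certificate generated above is signed by the cluster CA and carries the service IP, loopback, both control-plane node IPs and the HA virtual IP (172.27.207.254) as SANs. A compact, self-contained Go illustration of that kind of issuance (the throwaway CA, 24h validity and names are placeholders; minikube's own crypto.go differs in detail):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.27.203.42"), net.ParseIP("172.27.196.39"), net.ParseIP("172.27.207.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("apiserver cert generated with", len(srvTmpl.IPAddresses), "IP SANs")
}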
	I0421 19:07:44.959574    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:07:44.960322    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:07:44.961063    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:07:44.961063    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:07:44.962262    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 19:07:44.962610    5552 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 19:07:44.962610    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 19:07:44.963146    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 19:07:44.963440    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 19:07:44.963440    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 19:07:44.964257    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 19:07:44.964792    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:44.965158    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 19:07:44.965445    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 19:07:44.965445    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:07:47.162508    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:47.163166    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:47.163363    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:49.801714    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:07:49.801714    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:49.803052    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:07:49.906294    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 19:07:49.917885    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 19:07:49.957383    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 19:07:49.965269    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 19:07:50.001820    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 19:07:50.010913    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 19:07:50.056768    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 19:07:50.066792    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 19:07:50.102645    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 19:07:50.111428    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 19:07:50.150080    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 19:07:50.158182    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0421 19:07:50.183985    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:07:50.243374    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:07:50.300927    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:07:50.354607    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:07:50.408821    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0421 19:07:50.460181    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:07:50.513660    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:07:50.565851    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 19:07:50.615176    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:07:50.665709    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 19:07:50.718432    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 19:07:50.786129    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 19:07:50.828055    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 19:07:50.865135    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 19:07:50.902570    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 19:07:50.936462    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 19:07:50.974347    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0421 19:07:51.010821    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 19:07:51.060232    5552 ssh_runner.go:195] Run: openssl version
	I0421 19:07:51.082771    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 19:07:51.115933    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 19:07:51.127269    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 19:07:51.141186    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 19:07:51.165620    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:07:51.199580    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:07:51.234434    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:51.242500    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:51.257489    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:51.282579    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:07:51.319272    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 19:07:51.355181    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 19:07:51.362677    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 19:07:51.376296    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 19:07:51.402030    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
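Each CA certificate above is linked into /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject hash (e.g. 3ec20f2e.0), which is how OpenSSL-based clients locate it in a hashed directory. A Go sketch of the hash-and-symlink step (run on the guest as root; the helper name is ours):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash asks openssl for the certificate's subject hash, then symlinks
// /etc/ssl/certs/<hash>.0 at the PEM, mirroring the "openssl x509 -hash" plus
// "ln -fs" sequence logged above.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e" for 138002.pem above
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // -f behaviour: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/etc/ssl/certs/138002.pem"); err != nil {
		fmt.Println(err)
	}
}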
	I0421 19:07:51.438595    5552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:07:51.445422    5552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:07:51.445724    5552 kubeadm.go:928] updating node {m02 172.27.196.39 8443 v1.30.0 docker true true} ...
	I0421 19:07:51.445898    5552 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-736000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.196.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:07:51.445997    5552 kube-vip.go:111] generating kube-vip config ...
	I0421 19:07:51.459436    5552 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 19:07:51.485844    5552 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 19:07:51.485844    5552 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
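The kube-vip static pod above is rendered from a template with the virtual IP and API server port substituted in. A trimmed Go stand-in for that rendering (the template body is heavily shortened and is not minikube's actual kube-vip template):

package main

import (
	"os"
	"text/template"
)

// A cut-down manifest skeleton; only the VIP address and port are templated.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = tmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "172.27.207.254", Port: 8443})
}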
	I0421 19:07:51.499800    5552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:07:51.517753    5552 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 19:07:51.530708    5552 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 19:07:51.556041    5552 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0421 19:07:51.556041    5552 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0421 19:07:51.556041    5552 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0421 19:07:52.645563    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:07:52.658139    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:07:52.666388    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 19:07:52.666388    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 19:07:53.905135    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:07:53.918503    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:07:53.931495    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 19:07:53.931495    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 19:07:55.911839    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:07:55.956747    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:07:55.970368    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:07:55.977699    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 19:07:55.977699    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
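The binary transfer above follows a check-then-copy pattern: stat the remote path, and only scp the cached kubelet/kubeadm/kubectl when the file is missing or differs. A local Go sketch of that pattern (paths are illustrative; the real transfer goes over SSH):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureBinary copies the cached binary to the target only when the target is
// missing or its size differs, mirroring the stat-then-scp step above.
func ensureBinary(cached, target string) error {
	src, err := os.Stat(cached)
	if err != nil {
		return err
	}
	if dst, err := os.Stat(target); err == nil && dst.Size() == src.Size() {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(target, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := ensureBinary("cache/linux/amd64/v1.30.0/kubelet", "/var/lib/minikube/binaries/v1.30.0/kubelet"); err != nil {
		fmt.Println(err)
	}
}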
	I0421 19:07:56.548846    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 19:07:56.569746    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 19:07:56.605266    5552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:07:56.643646    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 19:07:56.693226    5552 ssh_runner.go:195] Run: grep 172.27.207.254	control-plane.minikube.internal$ /etc/hosts
	I0421 19:07:56.699513    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:07:56.738870    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:56.969031    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:07:57.001379    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:07:57.002238    5552 start.go:316] joinCluster: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:def
ault APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:07:57.002238    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 19:07:57.002238    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:07:59.191739    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:59.191790    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:59.191878    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:08:01.798434    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:08:01.798527    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:08:01.799273    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:08:02.026636    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0243628s)
	I0421 19:08:02.026783    5552 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:08:02.026783    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7v4r3f.vutef5no8emo2dip --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m02 --control-plane --apiserver-advertise-address=172.27.196.39 --apiserver-bind-port=8443"
	I0421 19:08:48.045692    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7v4r3f.vutef5no8emo2dip --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m02 --control-plane --apiserver-advertise-address=172.27.196.39 --apiserver-bind-port=8443": (46.0185824s)
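The join above is driven by the output of "kubeadm token create --print-join-command" on the primary, extended with the control-plane, CRI-socket and advertise-address flags for the new node. A small Go sketch that assembles such a command line (token and hash here are placeholders, not the values from this run):

package main

import (
	"fmt"
	"strings"
)

// joinCommand builds a control-plane join invocation of the shape logged above.
func joinCommand(endpoint, token, caHash, nodeName, nodeIP string) string {
	args := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + nodeIP,
		"--apiserver-bind-port=8443",
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(joinCommand("control-plane.minikube.internal:8443",
		"abcdef.0123456789abcdef", "sha256:<ca-cert-hash>", "ha-736000-m02", "172.27.196.39"))
}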
	I0421 19:08:48.045692    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 19:08:48.978772    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-736000-m02 minikube.k8s.io/updated_at=2024_04_21T19_08_48_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-736000 minikube.k8s.io/primary=false
	I0421 19:08:49.166196    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-736000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 19:08:49.366213    5552 start.go:318] duration metric: took 52.3636038s to joinCluster
	I0421 19:08:49.366213    5552 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:08:49.368800    5552 out.go:177] * Verifying Kubernetes components...
	I0421 19:08:49.367155    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:08:49.385799    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:08:49.789078    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:08:49.827233    5552 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:08:49.827679    5552 kapi.go:59] client config for ha-736000: &rest.Config{Host:"https://172.27.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 19:08:49.827679    5552 kubeadm.go:477] Overriding stale ClientConfig host https://172.27.207.254:8443 with https://172.27.203.42:8443
	I0421 19:08:49.829140    5552 node_ready.go:35] waiting up to 6m0s for node "ha-736000-m02" to be "Ready" ...
	I0421 19:08:49.829229    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:49.829229    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:49.829229    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:49.829229    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:50.031957    5552 round_trippers.go:574] Response Status: 200 OK in 202 milliseconds
	I0421 19:08:50.342143    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:50.342143    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:50.342346    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:50.342346    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:50.349814    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:08:50.831519    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:50.831519    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:50.831519    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:50.831519    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:50.846138    5552 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0421 19:08:51.339997    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:51.340068    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:51.340068    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:51.340191    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:51.352481    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:08:51.835457    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:51.835482    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:51.835482    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:51.835547    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:51.919707    5552 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0421 19:08:51.920478    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:52.329986    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:52.329986    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:52.329986    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:52.329986    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:52.339183    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:08:52.835745    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:52.835937    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:52.835937    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:52.835937    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:52.842273    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:53.341644    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:53.341644    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:53.341644    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:53.341644    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:53.348272    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:53.830823    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:53.830823    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:53.830927    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:53.830927    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:53.836159    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:08:54.337773    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:54.337773    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:54.337844    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:54.337844    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:54.348038    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:08:54.348038    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:54.831725    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:54.831789    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:54.831789    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:54.831789    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:54.838348    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:55.339083    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:55.339152    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:55.339152    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:55.339152    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:55.346007    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:55.843105    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:55.843189    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:55.843189    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:55.843189    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:55.847772    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:08:56.336772    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:56.336772    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:56.336772    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:56.336772    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:56.367727    5552 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0421 19:08:56.368591    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:56.830154    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:56.830154    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:56.830154    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:56.830154    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:56.836127    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:08:57.334411    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:57.334411    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:57.334411    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:57.334411    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:57.340100    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:08:57.836385    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:57.836446    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:57.836446    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:57.836446    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:57.840051    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:08:58.339679    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:58.339679    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:58.339679    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:58.339679    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:58.345693    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:58.841742    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:58.841742    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:58.841742    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:58.841742    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:58.846369    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:08:58.847923    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:59.329920    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:59.329920    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:59.329920    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:59.329920    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:59.336052    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:59.832521    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:59.832521    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:59.832521    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:59.832521    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:59.842320    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:00.340634    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:00.340634    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:00.340634    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:00.340634    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:00.350330    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:00.844105    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:00.844105    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:00.844193    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:00.844193    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:00.850035    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:00.851128    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:09:01.342610    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:01.342610    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:01.342610    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:01.342610    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:01.350347    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:01.845385    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:01.845385    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:01.845385    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:01.845385    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:01.850976    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:02.344082    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:02.344351    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.344351    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.344351    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.350625    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:02.833408    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:02.833408    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.833408    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.833408    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.839273    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:02.840475    5552 node_ready.go:49] node "ha-736000-m02" has status "Ready":"True"
	I0421 19:09:02.840475    5552 node_ready.go:38] duration metric: took 13.0112424s for node "ha-736000-m02" to be "Ready" ...
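
The repeated `GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02` requests above are a simple poll of the node's Ready condition at roughly 500ms intervals, bounded by the 6m0s timeout. A rough client-go equivalent is sketched below; the kubeconfig path, node name, and timings are taken from the log, and this is not minikube's node_ready.go.

```go
// Rough client-go sketch of the node-Ready wait loop traced above. Assumptions:
// kubeconfig path and node name come from the log; 500ms/6m mirror the logged behaviour.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-736000-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for node to become Ready")
}
```
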
	I0421 19:09:02.840475    5552 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:09:02.840475    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:02.840475    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.840475    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.840475    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.848775    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:09:02.859503    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.859503    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bp9zb
	I0421 19:09:02.859503    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.859503    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.859503    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.864623    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:02.865304    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:02.865304    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.865304    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.865304    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.870780    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:02.872197    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.872197    5552 pod_ready.go:81] duration metric: took 12.6937ms for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.872197    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.872197    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kv8pq
	I0421 19:09:02.872197    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.872197    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.872197    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.876797    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:02.877728    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:02.877728    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.877728    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.877728    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.881532    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.883322    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.883322    5552 pod_ready.go:81] duration metric: took 11.125ms for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.883322    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.883487    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000
	I0421 19:09:02.883525    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.883525    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.883525    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.887259    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.887693    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:02.887693    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.887693    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.887693    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.891290    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.892353    5552 pod_ready.go:92] pod "etcd-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.892353    5552 pod_ready.go:81] duration metric: took 9.0314ms for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.892353    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.892353    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m02
	I0421 19:09:02.892353    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.892353    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.892353    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.896947    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:02.897786    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:02.897840    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.897840    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.897840    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.901695    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.901695    5552 pod_ready.go:92] pod "etcd-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.902252    5552 pod_ready.go:81] duration metric: took 9.8989ms for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.902252    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:03.035609    5552 request.go:629] Waited for 133.1012ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000
	I0421 19:09:03.035730    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000
	I0421 19:09:03.035730    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.035730    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.035730    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.044977    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:03.238175    5552 request.go:629] Waited for 192.1819ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:03.238246    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:03.238363    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.238363    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.238363    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.249757    5552 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 19:09:03.252372    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:03.252450    5552 pod_ready.go:81] duration metric: took 350.1952ms for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:03.252450    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:03.439349    5552 request.go:629] Waited for 186.7242ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.439406    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.439406    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.439406    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.439406    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.445042    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:03.644939    5552 request.go:629] Waited for 198.5088ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:03.645070    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:03.645070    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.645144    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.645144    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.656131    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:09:03.834311    5552 request.go:629] Waited for 79.0088ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.834389    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.834517    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.834551    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.834551    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.843765    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:04.038887    5552 request.go:629] Waited for 194.0672ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.038887    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.039024    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.039024    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.039024    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.047797    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:09:04.260861    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:04.260861    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.260861    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.260861    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.268120    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:04.446480    5552 request.go:629] Waited for 177.5413ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.446985    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.447145    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.447145    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.447145    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.452758    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:04.454964    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:04.455055    5552 pod_ready.go:81] duration metric: took 1.2025962s for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:04.455055    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:04.634271    5552 request.go:629] Waited for 178.9704ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:09:04.634435    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:09:04.634435    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.634435    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.634435    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.640100    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:04.837422    5552 request.go:629] Waited for 195.1589ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:04.837517    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:04.837517    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.837517    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.837586    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.844244    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:04.844509    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:04.844509    5552 pod_ready.go:81] duration metric: took 389.4512ms for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:04.844509    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.040959    5552 request.go:629] Waited for 195.707ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:09:05.041046    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:09:05.041142    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.041142    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.041142    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.046912    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:05.243266    5552 request.go:629] Waited for 194.6967ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:05.243630    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:05.243693    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.243693    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.243785    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.251178    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:05.251688    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:05.251688    5552 pod_ready.go:81] duration metric: took 407.1763ms for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.251688    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.445374    5552 request.go:629] Waited for 193.4376ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:09:05.445550    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:09:05.445550    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.445550    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.445550    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.451123    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:05.634732    5552 request.go:629] Waited for 181.4822ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:05.634976    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:05.634976    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.634976    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.634976    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.642786    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:05.643544    5552 pod_ready.go:92] pod "kube-proxy-pqs5h" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:05.643544    5552 pod_ready.go:81] duration metric: took 391.8532ms for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.643544    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.840725    5552 request.go:629] Waited for 196.5598ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:09:05.840725    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:09:05.840725    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.840725    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.840725    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.847322    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:06.046061    5552 request.go:629] Waited for 196.7965ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.046218    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.046218    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.046218    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.046218    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.059009    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:09:06.060105    5552 pod_ready.go:92] pod "kube-proxy-tj6tp" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:06.060105    5552 pod_ready.go:81] duration metric: took 416.558ms for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.060105    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.248192    5552 request.go:629] Waited for 187.6363ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:09:06.248379    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:09:06.248379    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.248379    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.248379    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.253924    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:06.437246    5552 request.go:629] Waited for 182.1142ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:06.437488    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:06.437593    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.437593    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.437593    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.444408    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:06.445160    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:06.445160    5552 pod_ready.go:81] duration metric: took 385.0526ms for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.445160    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.640457    5552 request.go:629] Waited for 195.1153ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:09:06.640570    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:09:06.640694    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.640694    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.640694    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.645059    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:06.845581    5552 request.go:629] Waited for 199.3304ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.846013    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.846013    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.846013    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.846013    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.851818    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:06.853287    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:06.853481    5552 pod_ready.go:81] duration metric: took 408.3183ms for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.853481    5552 pod_ready.go:38] duration metric: took 4.0129783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
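
Once the node is Ready, the test waits for every system-critical pod (the label selectors listed in the pod_ready lines above) to report the PodReady condition, fetching each pod and then its node. The sketch below checks only the pod side of that wait and is illustrative, not minikube's pod_ready.go; the selectors and kubeconfig path are taken from the log.

```go
// Sketch: verify that every system-critical pod in kube-system reports PodReady=True.
// Assumptions: label selectors and kubeconfig path as logged above; not minikube code.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
```
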
	I0421 19:09:06.853560    5552 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:09:06.867247    5552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:09:06.900431    5552 api_server.go:72] duration metric: took 17.5340933s to wait for apiserver process to appear ...
	I0421 19:09:06.900431    5552 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:09:06.900431    5552 api_server.go:253] Checking apiserver healthz at https://172.27.203.42:8443/healthz ...
	I0421 19:09:06.914814    5552 api_server.go:279] https://172.27.203.42:8443/healthz returned 200:
	ok
	I0421 19:09:06.914942    5552 round_trippers.go:463] GET https://172.27.203.42:8443/version
	I0421 19:09:06.915055    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.915055    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.915055    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.916125    5552 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 19:09:06.916775    5552 api_server.go:141] control plane version: v1.30.0
	I0421 19:09:06.916775    5552 api_server.go:131] duration metric: took 16.3439ms to wait for apiserver health ...
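
The healthz and /version probes above are plain HTTPS GETs against the apiserver on 172.27.203.42:8443, authenticated with the profile's client certificate. A minimal sketch, assuming the certificate paths shown in the client config earlier in this log; it is not minikube's api_server.go.

```go
// Minimal sketch of the healthz probe logged above; certificate paths are assumed
// from the client config earlier in this output.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	caPEM, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt`)
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cert, err := tls.LoadX509KeyPair(
		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.crt`,
		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key`,
	)
	if err != nil {
		panic(err)
	}

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
	}}

	resp, err := client.Get("https://172.27.203.42:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n", resp.Status, body) // the log above shows "200" and body "ok"
}
```
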
	I0421 19:09:06.916775    5552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:09:07.034319    5552 request.go:629] Waited for 116.6747ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.034523    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.034523    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.034523    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.034523    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.045464    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:09:07.052980    5552 system_pods.go:59] 17 kube-system pods found
	I0421 19:09:07.052980    5552 system_pods.go:61] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:09:07.052980    5552 system_pods.go:74] duration metric: took 135.4772ms to wait for pod list to return data ...
	I0421 19:09:07.052980    5552 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:09:07.235401    5552 request.go:629] Waited for 181.5265ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:09:07.235401    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:09:07.235401    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.235401    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.235401    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.240026    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:07.241670    5552 default_sa.go:45] found service account: "default"
	I0421 19:09:07.241778    5552 default_sa.go:55] duration metric: took 188.6887ms for default service account to be created ...
	I0421 19:09:07.241778    5552 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:09:07.437016    5552 request.go:629] Waited for 194.9712ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.437154    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.437154    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.437154    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.437154    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.445479    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:09:07.457581    5552 system_pods.go:86] 17 kube-system pods found
	I0421 19:09:07.457581    5552 system_pods.go:89] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:09:07.457581    5552 system_pods.go:89] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:09:07.457581    5552 system_pods.go:89] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:09:07.457581    5552 system_pods.go:89] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:09:07.458191    5552 system_pods.go:126] duration metric: took 216.4118ms to wait for k8s-apps to be running ...
	I0421 19:09:07.458191    5552 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:09:07.469788    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:09:07.499346    5552 system_svc.go:56] duration metric: took 41.1551ms WaitForService to wait for kubelet
	I0421 19:09:07.499456    5552 kubeadm.go:576] duration metric: took 18.1331143s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:09:07.499456    5552 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:09:07.640605    5552 request.go:629] Waited for 140.798ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes
	I0421 19:09:07.640837    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes
	I0421 19:09:07.640978    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.640978    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.640978    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.646683    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:07.648783    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:09:07.648843    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:09:07.648900    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:09:07.648900    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:09:07.648951    5552 node_conditions.go:105] duration metric: took 149.3519ms to run NodePressure ...
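
The NodePressure step simply reads each node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs on both nodes). A short sketch of that read, again assuming the same kubeconfig; it is not minikube's node_conditions.go.

```go
// Sketch of the capacity read behind the NodePressure lines above (kubeconfig path
// assumed from the log; not minikube source).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```
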
	I0421 19:09:07.648967    5552 start.go:240] waiting for startup goroutines ...
	I0421 19:09:07.649022    5552 start.go:254] writing updated cluster config ...
	I0421 19:09:07.654683    5552 out.go:177] 
	I0421 19:09:07.663488    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:09:07.663488    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:09:07.670471    5552 out.go:177] * Starting "ha-736000-m03" control-plane node in "ha-736000" cluster
	I0421 19:09:07.675220    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:09:07.675789    5552 cache.go:56] Caching tarball of preloaded images
	I0421 19:09:07.676487    5552 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 19:09:07.676612    5552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 19:09:07.676924    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:09:07.682467    5552 start.go:360] acquireMachinesLock for ha-736000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:09:07.682467    5552 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-736000-m03"
	I0421 19:09:07.682467    5552 start.go:93] Provisioning new machine with config: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:09:07.682467    5552 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0421 19:09:07.689258    5552 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:09:07.689258    5552 start.go:159] libmachine.API.Create for "ha-736000" (driver="hyperv")
	I0421 19:09:07.690055    5552 client.go:168] LocalClient.Create starting
	I0421 19:09:07.690288    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 19:09:07.690710    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:09:07.690710    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:09:07.690865    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 19:09:07.691046    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:09:07.691046    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:09:07.691257    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 19:09:09.715867    5552 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 19:09:09.715867    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:09.715867    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 19:09:11.537427    5552 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 19:09:11.538202    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:11.538202    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:09:13.145780    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:09:13.145931    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:13.146025    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:09:17.015319    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:09:17.015319    5552 main.go:141] libmachine: [stderr =====>] : 
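The preflight above reduces to three PowerShell probes (Hyper-V module available, membership in the Hyper-V Administrators group with SID S-1-5-32-578, local Administrator) plus switch discovery, where an External switch is preferred and the built-in "Default Switch" (well-known GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) is the fallback. The same checks, condensed from the commands in the log so they can be rerun by hand:

    # Is the Hyper-V PowerShell module installed?
    @(Get-Module -ListAvailable Hyper-V).Name | Get-Unique

    # Is the caller a Hyper-V Administrator (S-1-5-32-578) or a local Administrator?
    $who = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $who.IsInRole([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578"))
    $who.IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")

    # Prefer an External switch; otherwise fall back to the built-in Default Switch.
    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    ConvertTo-Json @(Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType)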
	I0421 19:09:17.017916    5552 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:09:17.520622    5552 main.go:141] libmachine: Creating SSH key...
	I0421 19:09:17.857683    5552 main.go:141] libmachine: Creating VM...
	I0421 19:09:17.857683    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:09:20.902904    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:09:20.902904    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:20.903000    5552 main.go:141] libmachine: Using switch "Default Switch"
	I0421 19:09:20.903130    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:09:22.772522    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:09:22.772940    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:22.772940    5552 main.go:141] libmachine: Creating VHD
	I0421 19:09:22.773052    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 19:09:26.603968    5552 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 719F816D-DDCC-4E80-AF20-44DA9C0C1AFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 19:09:26.604246    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:26.604246    5552 main.go:141] libmachine: Writing magic tar header
	I0421 19:09:26.604246    5552 main.go:141] libmachine: Writing SSH key tar header
	I0421 19:09:26.614990    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 19:09:29.878652    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:29.878652    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:29.878985    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\disk.vhd' -SizeBytes 20000MB
	I0421 19:09:32.490078    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:32.490078    5552 main.go:141] libmachine: [stderr =====>] : 
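The disk build above is a three-cmdlet sequence: create a tiny fixed VHD, let the driver write a boot2docker-style "magic" tar header plus the machine's SSH key into its raw payload (the two "Writing ... tar header" lines, done in Go rather than PowerShell, presumably so the guest can pick up the key and format the rest of the disk on first boot), then convert it to a dynamic VHD and grow it to the requested 20000MB. A condensed sketch of the PowerShell half, with $machineDir standing in for the machine directory used in the log:

    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03'

    # 1. Small fixed-size VHD whose raw contents will carry the seed data.
    Hyper-V\New-VHD -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed

    # (the driver writes the tar header and SSH key into fixed.vhd at this point)

    # 2. Convert to a dynamic (sparse) VHD and drop the fixed source file.
    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" `
        -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource

    # 3. Grow the virtual size to the requested disk size.
    Hyper-V\Resize-VHD -Path "$machineDir\disk.vhd" -SizeBytes 20000MB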
	I0421 19:09:32.490206    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-736000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 19:09:36.338740    5552 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-736000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 19:09:36.339198    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:36.339258    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-736000-m03 -DynamicMemoryEnabled $false
	I0421 19:09:38.653966    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:38.654735    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:38.654818    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-736000-m03 -Count 2
	I0421 19:09:40.919100    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:40.919100    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:40.919299    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-736000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\boot2docker.iso'
	I0421 19:09:43.596704    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:43.597311    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:43.597311    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-736000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\disk.vhd'
	I0421 19:09:46.374581    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:46.374581    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:46.374581    5552 main.go:141] libmachine: Starting VM...
	I0421 19:09:46.375685    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-736000-m03
	I0421 19:09:49.554310    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:49.554310    5552 main.go:141] libmachine: [stderr =====>] : 
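With the disk in place, the VM itself is assembled and booted with five more cmdlets: New-VM on the chosen switch, static memory (dynamic memory off), two vCPUs, the boot2docker ISO in the DVD drive, the VHD attached, and finally Start-VM. The same sequence from the log, condensed ($machineDir as above):

    Hyper-V\New-VM ha-736000-m03 -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-736000-m03 -DynamicMemoryEnabled $false   # keep the full 2200MB
    Hyper-V\Set-VMProcessor ha-736000-m03 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-736000-m03 -Path "$machineDir\boot2docker.iso"   # boot media
    Hyper-V\Add-VMHardDiskDrive -VMName ha-736000-m03 -Path "$machineDir\disk.vhd"
    Hyper-V\Start-VM ha-736000-m03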
	I0421 19:09:49.554310    5552 main.go:141] libmachine: Waiting for host to start...
	I0421 19:09:49.554310    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:09:51.854256    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:09:51.854256    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:51.854610    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:09:54.456906    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:54.456906    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:55.459580    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:09:57.712745    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:09:57.712745    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:57.712908    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:00.340712    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:10:00.340788    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:01.347944    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:03.589647    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:03.589647    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:03.589647    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:06.203081    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:10:06.203081    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:07.215456    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:09.479605    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:09.479869    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:09.479869    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:12.094230    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:10:12.094230    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:13.095579    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:15.376631    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:15.376631    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:15.377548    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:18.058043    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:18.058043    5552 main.go:141] libmachine: [stderr =====>] : 
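The "Waiting for host to start..." block is a poll: the driver keeps re-reading the VM state and the first IP address on the first network adapter until DHCP on the Default Switch hands one out (roughly 30 seconds in this run). A minimal PowerShell rendering of that loop; the real driver retries from Go with its own pacing:

    do {
        Start-Sleep -Seconds 1
        $state = ( Hyper-V\Get-VM ha-736000-m03 ).State
        $ip    = (( Hyper-V\Get-VM ha-736000-m03 ).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    $ip   # 172.27.195.51 in this run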
	I0421 19:10:18.058515    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:20.255704    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:20.256398    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:20.256398    5552 machine.go:94] provisionDockerMachine start ...
	I0421 19:10:20.256508    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:22.475061    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:22.475061    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:22.475211    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:25.193979    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:25.193979    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:25.201194    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:25.213916    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:25.215063    5552 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:10:25.350574    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:10:25.350665    5552 buildroot.go:166] provisioning hostname "ha-736000-m03"
	I0421 19:10:25.350665    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:27.553312    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:27.553312    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:27.553312    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:30.226083    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:30.226083    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:30.232709    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:30.233525    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:30.233525    5552 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-736000-m03 && echo "ha-736000-m03" | sudo tee /etc/hostname
	I0421 19:10:30.411718    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-736000-m03
	
	I0421 19:10:30.411718    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:32.597092    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:32.597092    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:32.597092    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:35.269924    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:35.269985    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:35.275801    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:35.276885    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:35.276885    5552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-736000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-736000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-736000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:10:35.443614    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:10:35.443682    5552 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 19:10:35.443745    5552 buildroot.go:174] setting up certificates
	I0421 19:10:35.443745    5552 provision.go:84] configureAuth start
	I0421 19:10:35.443869    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:37.618718    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:37.618718    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:37.618718    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:40.296423    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:40.296423    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:40.297281    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:42.457901    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:42.458272    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:42.458321    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:45.140113    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:45.140396    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:45.140423    5552 provision.go:143] copyHostCerts
	I0421 19:10:45.140795    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 19:10:45.141129    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 19:10:45.141129    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 19:10:45.141555    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 19:10:45.142855    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 19:10:45.142922    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 19:10:45.142922    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 19:10:45.143449    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 19:10:45.144189    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 19:10:45.144786    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 19:10:45.144853    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 19:10:45.145232    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 19:10:45.146156    5552 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-736000-m03 san=[127.0.0.1 172.27.195.51 ha-736000-m03 localhost minikube]
	I0421 19:10:45.512049    5552 provision.go:177] copyRemoteCerts
	I0421 19:10:45.528591    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:10:45.528591    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:47.755151    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:47.755151    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:47.755796    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:50.447906    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:50.448664    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:50.448664    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:10:50.566846    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0380979s)
	I0421 19:10:50.566846    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 19:10:50.567323    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:10:50.623794    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 19:10:50.624360    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 19:10:50.678245    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 19:10:50.679252    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:10:50.731910    5552 provision.go:87] duration metric: took 15.2879941s to configureAuth
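configureAuth above generated a fresh Docker TLS server certificate for the new node, with the SAN list logged at 19:10:45 (loopback, 172.27.195.51, the ha-736000-m03 hostname), and pushed ca.pem, server.pem and server-key.pem to /etc/docker over SSH. To double-check the SANs on the Windows host, certutil can dump the PEM file directly (purely an inspection aid, not something the test itself runs):

    certutil -dump "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem"
    # look for the "Subject Alternative Name" extension in the output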
	I0421 19:10:50.732075    5552 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:10:50.733051    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:10:50.733165    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:52.942844    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:52.942844    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:52.943730    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:55.648643    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:55.649658    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:55.656699    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:55.657252    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:55.657252    5552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 19:10:55.803533    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 19:10:55.803637    5552 buildroot.go:70] root file system type: tmpfs
	I0421 19:10:55.803878    5552 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 19:10:55.803908    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:57.973601    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:57.974138    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:57.974269    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:00.742768    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:00.742768    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:00.750624    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:00.751180    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:00.751476    5552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.203.42"
	Environment="NO_PROXY=172.27.203.42,172.27.196.39"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 19:11:00.936519    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.203.42
	Environment=NO_PROXY=172.27.203.42,172.27.196.39
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 19:11:00.936615    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:03.148859    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:03.148859    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:03.148963    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:05.765285    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:05.766016    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:05.772474    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:05.773143    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:05.773143    5552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 19:11:08.031174    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 19:11:08.031174    5552 machine.go:97] duration metric: took 47.7744363s to provisionDockerMachine
	I0421 19:11:08.031174    5552 client.go:171] duration metric: took 2m0.3402639s to LocalClient.Create
	I0421 19:11:08.031174    5552 start.go:167] duration metric: took 2m0.3410612s to libmachine.API.Create "ha-736000"
	I0421 19:11:08.031174    5552 start.go:293] postStartSetup for "ha-736000-m03" (driver="hyperv")
	I0421 19:11:08.031174    5552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:11:08.044153    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:11:08.044153    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:10.241010    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:10.241010    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:10.241283    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:12.875614    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:12.875614    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:12.876058    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:11:12.984483    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9402947s)
	I0421 19:11:12.997723    5552 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:11:13.005159    5552 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:11:13.005242    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 19:11:13.005738    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 19:11:13.006589    5552 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 19:11:13.006705    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 19:11:13.021178    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:11:13.051289    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 19:11:13.107445    5552 start.go:296] duration metric: took 5.0762352s for postStartSetup
	I0421 19:11:13.110523    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:15.311678    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:15.311806    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:15.311915    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:17.964950    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:17.964950    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:17.965454    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:11:17.968973    5552 start.go:128] duration metric: took 2m10.285581s to createHost
	I0421 19:11:17.969054    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:20.174976    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:20.175409    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:20.175566    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:22.811708    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:22.811708    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:22.818750    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:22.819282    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:22.819401    5552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:11:22.953325    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713726682.960438090
	
	I0421 19:11:22.953325    5552 fix.go:216] guest clock: 1713726682.960438090
	I0421 19:11:22.953325    5552 fix.go:229] Guest: 2024-04-21 19:11:22.96043809 +0000 UTC Remote: 2024-04-21 19:11:17.9690544 +0000 UTC m=+587.387973001 (delta=4.99138369s)
	I0421 19:11:22.953325    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:25.175577    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:25.175894    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:25.175894    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:27.902734    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:27.903335    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:27.909663    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:27.910362    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:27.910541    5552 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713726682
	I0421 19:11:28.059587    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 19:11:22 UTC 2024
	
	I0421 19:11:28.059587    5552 fix.go:236] clock set: Sun Apr 21 19:11:22 UTC 2024
	 (err=<nil>)
	I0421 19:11:28.059715    5552 start.go:83] releasing machines lock for "ha-736000-m03", held for 2m20.3762513s
	I0421 19:11:28.059944    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:30.242853    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:30.242853    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:30.242853    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:32.852757    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:32.853232    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:32.856146    5552 out.go:177] * Found network options:
	I0421 19:11:32.859005    5552 out.go:177]   - NO_PROXY=172.27.203.42,172.27.196.39
	W0421 19:11:32.862359    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.862500    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:11:32.864926    5552 out.go:177]   - NO_PROXY=172.27.203.42,172.27.196.39
	W0421 19:11:32.868634    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.868634    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.870335    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.870335    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:11:32.873339    5552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:11:32.873339    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:32.884816    5552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 19:11:32.885675    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:35.100582    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:35.100582    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:35.100699    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:35.101361    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:35.101438    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:35.101536    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:37.870410    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:37.870493    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:37.871028    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:11:37.902765    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:37.903771    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:37.904279    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:11:38.105993    5552 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2202817s)
	W0421 19:11:38.106113    5552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:11:38.106113    5552 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2327367s)
	I0421 19:11:38.119476    5552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:11:38.154867    5552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:11:38.154947    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:11:38.155218    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:11:38.211111    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 19:11:38.250532    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 19:11:38.273007    5552 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 19:11:38.287474    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 19:11:38.326937    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:11:38.367033    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 19:11:38.404653    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:11:38.441342    5552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:11:38.477943    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 19:11:38.514150    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 19:11:38.551546    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 19:11:38.587476    5552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:11:38.624901    5552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:11:38.661292    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:38.895286    5552 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 19:11:38.935311    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:11:38.950317    5552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 19:11:38.990529    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:11:39.035183    5552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:11:39.081689    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:11:39.122799    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:11:39.168883    5552 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 19:11:39.245304    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:11:39.276709    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:11:39.339393    5552 ssh_runner.go:195] Run: which cri-dockerd
	I0421 19:11:39.364856    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 19:11:39.394447    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 19:11:39.457803    5552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 19:11:39.689555    5552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 19:11:39.922031    5552 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 19:11:39.922096    5552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 19:11:39.977343    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:40.223560    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:11:42.828318    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6047395s)
	I0421 19:11:42.841570    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 19:11:42.884778    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:11:42.923470    5552 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 19:11:43.151931    5552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 19:11:43.395400    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:43.628473    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 19:11:43.679911    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:11:43.721368    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:43.959012    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 19:11:44.093650    5552 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 19:11:44.108208    5552 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 19:11:44.119740    5552 start.go:562] Will wait 60s for crictl version
	I0421 19:11:44.132875    5552 ssh_runner.go:195] Run: which crictl
	I0421 19:11:44.156967    5552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:11:44.222612    5552 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 19:11:44.234163    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:11:44.282497    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:11:44.324197    5552 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 19:11:44.327748    5552 out.go:177]   - env NO_PROXY=172.27.203.42
	I0421 19:11:44.330372    5552 out.go:177]   - env NO_PROXY=172.27.203.42,172.27.196.39
	I0421 19:11:44.334106    5552 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 19:11:44.343570    5552 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 19:11:44.343570    5552 ip.go:210] interface addr: 172.27.192.1/20
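Here ip.go is locating the host-side address of the vEthernet (Default Switch) adapter (172.27.192.1/20) so it can be written into the guest's /etc/hosts as host.minikube.internal in the next step. minikube enumerates interfaces in Go; for a manual check the same value can be read with the NetTCPIP module (illustrative only, not what the test calls):

    Get-NetIPAddress -InterfaceAlias 'vEthernet (Default Switch)' -AddressFamily IPv4 |
        Select-Object IPAddress, PrefixLength   # expect 172.27.192.1 with prefix length 20 on this host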
	I0421 19:11:44.359491    5552 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 19:11:44.366910    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:11:44.395045    5552 mustload.go:65] Loading cluster: ha-736000
	I0421 19:11:44.395815    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:11:44.396046    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:11:46.575258    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:46.575258    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:46.575258    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:11:46.576252    5552 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000 for IP: 172.27.195.51
	I0421 19:11:46.576252    5552 certs.go:194] generating shared ca certs ...
	I0421 19:11:46.576252    5552 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:11:46.577136    5552 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 19:11:46.577452    5552 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 19:11:46.577635    5552 certs.go:256] generating profile certs ...
	I0421 19:11:46.578307    5552 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key
	I0421 19:11:46.578486    5552 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736
	I0421 19:11:46.578486    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.203.42 172.27.196.39 172.27.195.51 172.27.207.254]
	I0421 19:11:47.001958    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736 ...
	I0421 19:11:47.001958    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736: {Name:mka7cd24961d014aa09bdc5f5ea7b50c20452ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:11:47.002980    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736 ...
	I0421 19:11:47.002980    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736: {Name:mk3ecac3bc96e5743192beddc441181563013b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:11:47.003644    5552 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt
	I0421 19:11:47.015695    5552 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key
	I0421 19:11:47.016729    5552 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key
	I0421 19:11:47.016729    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:11:47.017746    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:11:47.018186    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:11:47.018353    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:11:47.018389    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:11:47.018624    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:11:47.018825    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:11:47.018825    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:11:47.019926    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 19:11:47.020199    5552 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 19:11:47.020199    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 19:11:47.020631    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 19:11:47.020902    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 19:11:47.020902    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 19:11:47.021633    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 19:11:47.021894    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 19:11:47.022085    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:47.022353    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 19:11:47.022607    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:11:49.241421    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:49.241421    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:49.242471    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:51.909383    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:11:51.909383    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:51.911090    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:11:52.020886    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 19:11:52.029003    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 19:11:52.069530    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 19:11:52.077201    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 19:11:52.115362    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 19:11:52.124317    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 19:11:52.162603    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 19:11:52.170344    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 19:11:52.207334    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 19:11:52.216824    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 19:11:52.254184    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 19:11:52.263523    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0421 19:11:52.288347    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:11:52.340680    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:11:52.395838    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:11:52.452113    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:11:52.507456    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0421 19:11:52.559742    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:11:52.609854    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:11:52.664891    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 19:11:52.718848    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 19:11:52.770633    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:11:52.823237    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 19:11:52.876196    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 19:11:52.914240    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 19:11:52.950347    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 19:11:52.988724    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 19:11:53.023476    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 19:11:53.060608    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0421 19:11:53.095210    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 19:11:53.143909    5552 ssh_runner.go:195] Run: openssl version
	I0421 19:11:53.168848    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 19:11:53.209760    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 19:11:53.217324    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 19:11:53.231603    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 19:11:53.254133    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:11:53.294240    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:11:53.330979    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:53.339089    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:53.352501    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:53.378315    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:11:53.415652    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 19:11:53.450034    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 19:11:53.459324    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 19:11:53.472193    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 19:11:53.497702    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 19:11:53.536413    5552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:11:53.544339    5552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:11:53.544650    5552 kubeadm.go:928] updating node {m03 172.27.195.51 8443 v1.30.0 docker true true} ...
	I0421 19:11:53.544837    5552 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-736000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.195.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:11:53.545039    5552 kube-vip.go:111] generating kube-vip config ...
	I0421 19:11:53.559213    5552 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 19:11:53.589768    5552 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 19:11:53.589768    5552 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0421 19:11:53.604365    5552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:11:53.623473    5552 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 19:11:53.638073    5552 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 19:11:53.658698    5552 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0421 19:11:53.658698    5552 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0421 19:11:53.658698    5552 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0421 19:11:53.658698    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:11:53.658698    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:11:53.676822    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:11:53.678035    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:11:53.678035    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:11:53.686698    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 19:11:53.686698    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 19:11:53.686698    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 19:11:53.686698    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 19:11:53.755518    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:11:53.770932    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:11:53.924544    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 19:11:53.924625    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0421 19:11:55.095766    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 19:11:55.124160    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 19:11:55.158961    5552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:11:55.199801    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 19:11:55.254112    5552 ssh_runner.go:195] Run: grep 172.27.207.254	control-plane.minikube.internal$ /etc/hosts
	I0421 19:11:55.261824    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:11:55.299305    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:55.537065    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:11:55.572810    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:11:55.574017    5552 start.go:316] joinCluster: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.27.195.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:11:55.574262    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 19:11:55.574262    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:11:57.760251    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:57.760251    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:57.760979    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:12:00.427029    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:12:00.427472    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:12:00.427972    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:12:00.652693    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0783946s)
	I0421 19:12:00.652819    5552 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.27.195.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:12:00.652930    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 836nuw.84ejy2nbaoe6fjph --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m03 --control-plane --apiserver-advertise-address=172.27.195.51 --apiserver-bind-port=8443"
	I0421 19:12:48.252845    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 836nuw.84ejy2nbaoe6fjph --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m03 --control-plane --apiserver-advertise-address=172.27.195.51 --apiserver-bind-port=8443": (47.5995821s)
	I0421 19:12:48.252845    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 19:12:49.111006    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-736000-m03 minikube.k8s.io/updated_at=2024_04_21T19_12_49_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-736000 minikube.k8s.io/primary=false
	I0421 19:12:49.307192    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-736000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 19:12:49.471180    5552 start.go:318] duration metric: took 53.8967854s to joinCluster
	I0421 19:12:49.471536    5552 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.27.195.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:12:49.474053    5552 out.go:177] * Verifying Kubernetes components...
	I0421 19:12:49.472358    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:12:49.490050    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:12:49.922744    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:12:49.971201    5552 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:12:49.972657    5552 kapi.go:59] client config for ha-736000: &rest.Config{Host:"https://172.27.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 19:12:49.972817    5552 kubeadm.go:477] Overriding stale ClientConfig host https://172.27.207.254:8443 with https://172.27.203.42:8443
	I0421 19:12:49.973841    5552 node_ready.go:35] waiting up to 6m0s for node "ha-736000-m03" to be "Ready" ...
	I0421 19:12:49.973841    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:49.973841    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:49.973841    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:49.973841    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:49.990836    5552 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0421 19:12:50.489315    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:50.489372    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:50.489372    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:50.489372    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:50.494222    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:50.976768    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:50.976829    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:50.976829    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:50.976829    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:50.989134    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:51.486240    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:51.486240    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:51.486240    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:51.486240    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:51.491617    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:51.977680    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:51.977680    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:51.977680    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:51.977680    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:51.982923    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:51.983451    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:52.485213    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:52.485213    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:52.485213    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:52.485213    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:52.498727    5552 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 19:12:52.988785    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:52.988785    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:52.988785    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:52.988785    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:52.993792    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:53.478712    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:53.478778    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:53.478778    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:53.478847    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:53.484323    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:53.983803    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:53.983869    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:53.983869    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:53.983869    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:53.989486    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:53.990334    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:54.474842    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:54.474965    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:54.474965    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:54.474965    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:54.481555    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:12:54.981375    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:54.981484    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:54.981553    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:54.981553    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:54.991205    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:12:55.483348    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:55.483348    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:55.483417    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:55.483417    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:55.487857    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:55.984741    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:55.984741    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:55.984741    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:55.984741    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:55.994447    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:12:55.994623    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:56.478132    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:56.478241    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:56.478241    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:56.478342    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:56.495069    5552 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0421 19:12:56.980617    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:56.980617    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:56.980617    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:56.980617    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:56.987054    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:12:57.481427    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:57.481427    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:57.481427    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:57.481427    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:57.489933    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:12:57.985396    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:57.985396    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:57.985396    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:57.985396    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:57.989923    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:58.488507    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:58.488507    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:58.488507    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:58.488507    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:58.501162    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:58.502807    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:58.977764    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:58.977852    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:58.977852    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:58.977852    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:58.983757    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.478805    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:59.478805    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.478805    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.478805    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.491472    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:59.492248    5552 node_ready.go:49] node "ha-736000-m03" has status "Ready":"True"
	I0421 19:12:59.492326    5552 node_ready.go:38] duration metric: took 9.5184179s for node "ha-736000-m03" to be "Ready" ...
	I0421 19:12:59.492326    5552 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:12:59.492442    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:12:59.492442    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.492442    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.492528    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.507377    5552 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0421 19:12:59.519864    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.519864    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bp9zb
	I0421 19:12:59.519864    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.519864    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.519864    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.525411    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.526602    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:12:59.526602    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.526602    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.526602    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.531509    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:59.532525    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.532568    5552 pod_ready.go:81] duration metric: took 12.7037ms for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.532568    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.532631    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kv8pq
	I0421 19:12:59.532761    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.532761    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.532761    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.541119    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:12:59.541119    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:12:59.541119    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.541119    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.541119    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.547083    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.548211    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.548273    5552 pod_ready.go:81] duration metric: took 15.7053ms for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.548273    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.548453    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000
	I0421 19:12:59.548453    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.548453    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.548453    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.551066    5552 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 19:12:59.552038    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:12:59.552038    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.552038    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.552038    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.557583    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.558178    5552 pod_ready.go:92] pod "etcd-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.558178    5552 pod_ready.go:81] duration metric: took 9.9053ms for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.558178    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.558178    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m02
	I0421 19:12:59.558178    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.558178    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.558178    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.563831    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.564707    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:12:59.564797    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.564797    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.564797    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.577576    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:59.578555    5552 pod_ready.go:92] pod "etcd-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.578555    5552 pod_ready.go:81] duration metric: took 20.3762ms for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.578555    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.681260    5552 request.go:629] Waited for 102.5901ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:12:59.681594    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:12:59.681594    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.681594    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.681594    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.688848    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:12:59.887618    5552 request.go:629] Waited for 196.7779ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:59.887735    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:59.887735    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.887735    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.887735    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.894363    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:00.090292    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:00.090292    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.090543    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.090543    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.095604    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:00.292791    5552 request.go:629] Waited for 194.7248ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.292877    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.292877    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.292877    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.292877    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.297232    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:00.590265    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:00.590479    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.590479    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.590479    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.594781    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:00.684871    5552 request.go:629] Waited for 88.5672ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.684871    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.684871    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.685114    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.685114    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.694992    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:13:01.091329    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:01.091550    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.091550    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.091550    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.096226    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:01.098312    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:01.098428    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.098428    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.098428    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.108435    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:01.591021    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:01.591021    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.591021    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.591021    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.595968    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:01.597007    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:01.597007    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.597007    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.597007    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.601270    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:01.602595    5552 pod_ready.go:102] pod "etcd-ha-736000-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 19:13:02.092054    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:02.092109    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.092109    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.092109    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.098066    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:02.099269    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:02.099348    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.099348    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.099348    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.103372    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:13:02.103372    5552 pod_ready.go:92] pod "etcd-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:02.103946    5552 pod_ready.go:81] duration metric: took 2.5247994s for pod "etcd-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.103997    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.104110    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000
	I0421 19:13:02.104110    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.104167    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.104167    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.107532    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:13:02.280133    5552 request.go:629] Waited for 170.4314ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:02.280133    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:02.280133    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.280133    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.280133    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.284853    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:02.286537    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:02.286624    5552 pod_ready.go:81] duration metric: took 182.626ms for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.286624    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.482588    5552 request.go:629] Waited for 195.7346ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:13:02.482760    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:13:02.482814    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.482814    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.482843    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.487127    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:02.685101    5552 request.go:629] Waited for 196.0591ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:02.685339    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:02.685339    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.685339    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.685339    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.689393    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:02.691472    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:02.691531    5552 pod_ready.go:81] duration metric: took 404.9042ms for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.691531    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.891246    5552 request.go:629] Waited for 199.4436ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m03
	I0421 19:13:02.891349    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m03
	I0421 19:13:02.891349    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.891349    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.891349    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.899722    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:13:03.079508    5552 request.go:629] Waited for 178.9394ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:03.079508    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:03.079791    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.079791    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.079791    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.084133    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:03.084133    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:03.084133    5552 pod_ready.go:81] duration metric: took 392.5985ms for pod "kube-apiserver-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.084133    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.281183    5552 request.go:629] Waited for 196.7875ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:13:03.281580    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:13:03.281626    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.281688    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.281688    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.287629    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:03.486814    5552 request.go:629] Waited for 197.3532ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:03.487058    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:03.487103    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.487103    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.487103    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.492259    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:03.493654    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:03.493654    5552 pod_ready.go:81] duration metric: took 409.5181ms for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.493654    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.689687    5552 request.go:629] Waited for 194.9778ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:13:03.689861    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:13:03.689861    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.689914    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.689914    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.694919    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:03.891348    5552 request.go:629] Waited for 194.3314ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:03.891576    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:03.891576    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.891691    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.891691    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.898622    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:03.899515    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:03.899515    5552 pod_ready.go:81] duration metric: took 405.8587ms for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.899515    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.080715    5552 request.go:629] Waited for 181.0088ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m03
	I0421 19:13:04.080715    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m03
	I0421 19:13:04.080715    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.080715    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.080715    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.086682    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:04.285954    5552 request.go:629] Waited for 198.318ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.286252    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.286379    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.286379    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.286379    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.293773    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:13:04.294646    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:04.294646    5552 pod_ready.go:81] duration metric: took 395.1285ms for pod "kube-controller-manager-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.294646    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blktz" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.487477    5552 request.go:629] Waited for 192.0745ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-blktz
	I0421 19:13:04.487640    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-blktz
	I0421 19:13:04.487640    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.487640    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.487640    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.492946    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:04.691108    5552 request.go:629] Waited for 196.2906ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.691349    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.691349    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.691349    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.691527    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.697080    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:04.697943    5552 pod_ready.go:92] pod "kube-proxy-blktz" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:04.697943    5552 pod_ready.go:81] duration metric: took 403.2938ms for pod "kube-proxy-blktz" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.697943    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.879325    5552 request.go:629] Waited for 181.2322ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:13:04.879706    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:13:04.879706    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.879706    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.879706    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.885603    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:05.083361    5552 request.go:629] Waited for 196.4332ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.083480    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.083536    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.083575    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.083575    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.088972    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:05.090315    5552 pod_ready.go:92] pod "kube-proxy-pqs5h" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:05.090315    5552 pod_ready.go:81] duration metric: took 392.3693ms for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.090904    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.286010    5552 request.go:629] Waited for 194.9558ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:13:05.286288    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:13:05.286414    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.286414    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.286414    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.291166    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:05.489639    5552 request.go:629] Waited for 197.2532ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:05.490038    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:05.490038    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.490104    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.490134    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.494644    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:05.496101    5552 pod_ready.go:92] pod "kube-proxy-tj6tp" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:05.496201    5552 pod_ready.go:81] duration metric: took 405.2948ms for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.496201    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.694346    5552 request.go:629] Waited for 198.0483ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:13:05.694658    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:13:05.694658    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.694658    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.694658    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.704810    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:05.883545    5552 request.go:629] Waited for 177.5476ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.883545    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.883860    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.883860    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.883860    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.893193    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:13:05.894494    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:05.894610    5552 pod_ready.go:81] duration metric: took 398.406ms for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.894610    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.088172    5552 request.go:629] Waited for 193.0961ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:13:06.088353    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:13:06.088353    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.088418    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.088630    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.093788    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:06.289812    5552 request.go:629] Waited for 194.2555ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:06.289812    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:06.289812    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.289812    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.289812    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.294394    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:06.295766    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:06.295766    5552 pod_ready.go:81] duration metric: took 401.0413ms for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.295766    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.493994    5552 request.go:629] Waited for 198.2264ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m03
	I0421 19:13:06.494563    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m03
	I0421 19:13:06.494563    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.494563    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.494563    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.500123    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:06.679504    5552 request.go:629] Waited for 178.3582ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:06.679823    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:06.679823    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.679957    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.679957    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.686603    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:06.687802    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:06.687802    5552 pod_ready.go:81] duration metric: took 392.0332ms for pod "kube-scheduler-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.687802    5552 pod_ready.go:38] duration metric: took 7.1954258s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
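
Note: the readiness loop above (pod_ready.go) repeatedly GETs each control-plane pod and its node until the pod reports the Ready condition. A minimal client-go sketch of the same idea, assuming a kubeconfig at the default path; the pod name, 2s interval, and error handling are illustrative, not minikube's actual pod_ready.go:

// podready_sketch.go: poll one pod until its Ready condition is True (illustrative).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig kubectl would use (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	podName := "kube-apiserver-ha-736000" // example pod taken from the log above
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(podName, "is Ready")
}
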
	I0421 19:13:06.687802    5552 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:13:06.702651    5552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:13:06.735599    5552 api_server.go:72] duration metric: took 17.2639418s to wait for apiserver process to appear ...
	I0421 19:13:06.735706    5552 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:13:06.735706    5552 api_server.go:253] Checking apiserver healthz at https://172.27.203.42:8443/healthz ...
	I0421 19:13:06.745366    5552 api_server.go:279] https://172.27.203.42:8443/healthz returned 200:
	ok
	I0421 19:13:06.745366    5552 round_trippers.go:463] GET https://172.27.203.42:8443/version
	I0421 19:13:06.745366    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.745366    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.745366    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.747464    5552 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 19:13:06.748125    5552 api_server.go:141] control plane version: v1.30.0
	I0421 19:13:06.748190    5552 api_server.go:131] duration metric: took 12.4843ms to wait for apiserver health ...
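
Note: after the readiness waits, the log shows a probe of /healthz (expecting "ok") followed by a GET of /version to read the control-plane version. A rough client-go equivalent of those two calls, reusing the authenticated REST client from the kubeconfig (a sketch, not minikube's api_server.go):

// healthz_sketch.go: check API server /healthz and report the server version (illustrative).
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz through the authenticated REST client; a healthy server returns "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz: %s\n", body)

	// GET /version, the same call the log shows right after the health check.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.30.0
}
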
	I0421 19:13:06.748244    5552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:13:06.882701    5552 request.go:629] Waited for 134.1595ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:06.882822    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:06.882822    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.882822    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.883018    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.893728    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:06.905783    5552 system_pods.go:59] 24 kube-system pods found
	I0421 19:13:06.905783    5552 system_pods.go:61] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "etcd-ha-736000-m03" [4b774b33-bf9e-450a-8b4a-0b6146e19ce9] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kindnet-hcfln" [56443347-dfaf-443f-9014-e19cb654b235] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-apiserver-ha-736000-m03" [06d38aa2-774f-4276-915a-2b28029132e2] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-controller-manager-ha-736000-m03" [ca1a34ce-37d8-4066-b411-6ada78b6741d] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-proxy-blktz" [bbad68d6-1ee4-4c58-8cdc-aa091eec6a90] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-scheduler-ha-736000-m03" [57c9bb2f-dbf6-489e-a2ad-686b5cdbb090] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-vip-ha-736000-m03" [59d91112-5b6a-486a-bc8f-f3613243482d] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:13:06.905783    5552 system_pods.go:74] duration metric: took 157.5377ms to wait for pod list to return data ...
	I0421 19:13:06.905783    5552 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:13:07.086904    5552 request.go:629] Waited for 181.119ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:13:07.086904    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:13:07.086904    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:07.086904    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:07.086904    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:07.093514    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:07.093514    5552 default_sa.go:45] found service account: "default"
	I0421 19:13:07.093514    5552 default_sa.go:55] duration metric: took 187.7289ms for default service account to be created ...
	I0421 19:13:07.093514    5552 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:13:07.291722    5552 request.go:629] Waited for 198.0046ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:07.291839    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:07.291839    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:07.291839    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:07.291839    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:07.302350    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:07.313365    5552 system_pods.go:86] 24 kube-system pods found
	I0421 19:13:07.313365    5552 system_pods.go:89] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "etcd-ha-736000-m03" [4b774b33-bf9e-450a-8b4a-0b6146e19ce9] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kindnet-hcfln" [56443347-dfaf-443f-9014-e19cb654b235] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-apiserver-ha-736000-m03" [06d38aa2-774f-4276-915a-2b28029132e2] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-controller-manager-ha-736000-m03" [ca1a34ce-37d8-4066-b411-6ada78b6741d] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-proxy-blktz" [bbad68d6-1ee4-4c58-8cdc-aa091eec6a90] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-scheduler-ha-736000-m03" [57c9bb2f-dbf6-489e-a2ad-686b5cdbb090] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-vip-ha-736000-m03" [59d91112-5b6a-486a-bc8f-f3613243482d] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:13:07.314302    5552 system_pods.go:126] duration metric: took 220.7873ms to wait for k8s-apps to be running ...
	I0421 19:13:07.314302    5552 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:13:07.329090    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:13:07.357772    5552 system_svc.go:56] duration metric: took 43.4688ms WaitForService to wait for kubelet
	I0421 19:13:07.357772    5552 kubeadm.go:576] duration metric: took 17.8861097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
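
Note: the kubelet check above is just a systemctl probe run over SSH inside the guest. A local stand-in for the same check (run it inside the VM, e.g. via minikube ssh; the exact command line here is illustrative):

// kubelet_check_sketch.go: check whether the kubelet systemd unit is active (illustrative).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
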
	I0421 19:13:07.357772    5552 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:13:07.492688    5552 request.go:629] Waited for 134.7695ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes
	I0421 19:13:07.492688    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes
	I0421 19:13:07.492688    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:07.492688    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:07.492892    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:07.498219    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:07.499208    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:13:07.499208    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:13:07.499208    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:13:07.499208    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:13:07.499208    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:13:07.499208    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:13:07.499208    5552 node_conditions.go:105] duration metric: took 141.4353ms to run NodePressure ...
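
Note: the NodePressure step lists the nodes and reads the capacity each one reports (the three "ephemeral capacity is 17734596Ki / cpu capacity is 2" lines above). A short client-go sketch that reads the same fields; output formatting is my own, not minikube's node_conditions.go:

// node_capacity_sketch.go: print CPU and ephemeral-storage capacity per node (illustrative).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
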
	I0421 19:13:07.499208    5552 start.go:240] waiting for startup goroutines ...
	I0421 19:13:07.499208    5552 start.go:254] writing updated cluster config ...
	I0421 19:13:07.515631    5552 ssh_runner.go:195] Run: rm -f paused
	I0421 19:13:07.678623    5552 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:13:07.682726    5552 out.go:177] * Done! kubectl is now configured to use "ha-736000" cluster and "default" namespace by default
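
Note: most of the ~200ms pauses in the wait loops above are the client-go rate limiter, as the log itself says ("Waited for ... due to client-side throttling, not priority and fairness"), not API-server latency. When scripting similar checks with client-go, the limiter is configured on rest.Config before building the clientset; the QPS/Burst values below are illustrative, not minikube's settings:

// throttling_sketch.go: raise client-go's client-side rate limits so bursts of GETs
// (like the readiness checks above) are not delayed ~200ms each (illustrative values).
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	// ... use the clientset as usual; request.go stops logging throttling waits
	// for short bursts of this size.
}
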
	
	
	==> Docker <==
	Apr 21 19:05:11 ha-736000 cri-dockerd[1228]: time="2024-04-21T19:05:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/46ca7ef6a52695b8e4a681face76aa44c4ac416272921af3615f27187737296d/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 19:05:11 ha-736000 dockerd[1331]: time="2024-04-21T19:05:11.867756975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:05:11 ha-736000 dockerd[1331]: time="2024-04-21T19:05:11.869225877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:05:11 ha-736000 dockerd[1331]: time="2024-04-21T19:05:11.869293877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:11 ha-736000 dockerd[1331]: time="2024-04-21T19:05:11.869778178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:11 ha-736000 cri-dockerd[1228]: time="2024-04-21T19:05:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc63af3f3c46bd2b841e403b9f74fb80c5f16c1e74c869ab9c14fc4cb097b8cc/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 19:05:12 ha-736000 cri-dockerd[1228]: time="2024-04-21T19:05:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/141288f6eefaef69d0249923a3e00f8646be1d5058c668104dd7b92d71a2e78b/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.427208615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.427443016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.427485017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.427645918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.473721147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.474085049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.474365550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.474693152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661229653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661462254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661504754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661832954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:46 ha-736000 cri-dockerd[1228]: time="2024-04-21T19:13:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/acdf86c89c3e8c324af41a4f457b43e522eda33e2414ccc223e67a72e3a12553/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 21 19:13:48 ha-736000 cri-dockerd[1228]: time="2024-04-21T19:13:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506673734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506767035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506783235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506913736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2c8dc2e2ae84d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   acdf86c89c3e8       busybox-fc5497c4f-pnbbn
	6c62393114dc7       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   141288f6eefae       coredns-7db6d8ff4d-kv8pq
	638e6b90760c8       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   bc63af3f3c46b       coredns-7db6d8ff4d-bp9zb
	8fc14347dc613       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   46ca7ef6a5269       storage-provisioner
	67806b4246ae6       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   7c65bef05022a       kindnet-wwkr9
	a9cc5bf6a42d5       a0bf559e280cf                                                                                         9 minutes ago        Running             kube-proxy                0                   6f60e71384698       kube-proxy-pqs5h
	c922d4fe4beb4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     10 minutes ago       Running             kube-vip                  0                   12f9d02462845       kube-vip-ha-736000
	256d65336b19e       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   9b7895f7d7345       etcd-ha-736000
	2b4f4a1077366       c42f13656d0b2                                                                                         10 minutes ago       Running             kube-apiserver            0                   e7717a3630e7c       kube-apiserver-ha-736000
	ee3dd828038f3       c7aad43836fa5                                                                                         10 minutes ago       Running             kube-controller-manager   0                   0c7f2f1bde060       kube-controller-manager-ha-736000
	c4e32eeddc5d0       259c8277fcbbc                                                                                         10 minutes ago       Running             kube-scheduler            0                   6821588bdfb91       kube-scheduler-ha-736000
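
Note: the container-status table above has the column layout of crictl output (ATTEMPT, POD ID, POD) collected inside the guest. A tiny sketch that shells out the same way; it assumes crictl is on PATH inside the VM and the CRI endpoint is the default, and is only an approximation of how this table was gathered:

// container_status_sketch.go: list containers via crictl inside the guest (illustrative).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(string(out))
}
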
	
	
	==> coredns [638e6b90760c] <==
	[INFO] 10.244.1.2:46317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.074879373s
	[INFO] 10.244.1.2:44497 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033531012s
	[INFO] 10.244.2.2:37103 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133301s
	[INFO] 10.244.2.2:59848 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000203602s
	[INFO] 10.244.0.4:56770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116801s
	[INFO] 10.244.0.4:59242 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.106295392s
	[INFO] 10.244.1.2:49714 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000262602s
	[INFO] 10.244.1.2:37201 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105s
	[INFO] 10.244.2.2:35465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078601s
	[INFO] 10.244.2.2:48750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000676s
	[INFO] 10.244.0.4:47753 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.005962638s
	[INFO] 10.244.0.4:38588 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161301s
	[INFO] 10.244.0.4:55794 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000623s
	[INFO] 10.244.0.4:55062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000404003s
	[INFO] 10.244.0.4:35274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106201s
	[INFO] 10.244.0.4:33671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000320102s
	[INFO] 10.244.1.2:54675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001183s
	[INFO] 10.244.1.2:57457 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	[INFO] 10.244.1.2:59030 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171001s
	[INFO] 10.244.2.2:51204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142501s
	[INFO] 10.244.0.4:53285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000460703s
	[INFO] 10.244.0.4:59478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059601s
	[INFO] 10.244.0.4:60738 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000693s
	[INFO] 10.244.1.2:57081 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000292302s
	[INFO] 10.244.2.2:56624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147701s
	
	
	==> coredns [6c62393114dc] <==
	[INFO] 10.244.1.2:55146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108s
	[INFO] 10.244.1.2:58020 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001094s
	[INFO] 10.244.2.2:49508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110701s
	[INFO] 10.244.2.2:49267 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000088401s
	[INFO] 10.244.2.2:50616 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000689s
	[INFO] 10.244.2.2:55615 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126901s
	[INFO] 10.244.2.2:50917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000216702s
	[INFO] 10.244.2.2:59737 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000724s
	[INFO] 10.244.0.4:33352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101901s
	[INFO] 10.244.0.4:40067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	[INFO] 10.244.1.2:44122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072201s
	[INFO] 10.244.2.2:42201 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208401s
	[INFO] 10.244.2.2:39977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194501s
	[INFO] 10.244.2.2:47817 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167201s
	[INFO] 10.244.0.4:39376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175601s
	[INFO] 10.244.1.2:58828 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184001s
	[INFO] 10.244.1.2:45992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184502s
	[INFO] 10.244.1.2:56858 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000192802s
	[INFO] 10.244.2.2:35837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000404202s
	[INFO] 10.244.2.2:57867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129301s
	[INFO] 10.244.2.2:33588 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000341902s
	[INFO] 10.244.0.4:56879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196601s
	[INFO] 10.244.0.4:57921 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	[INFO] 10.244.0.4:44088 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008s
	[INFO] 10.244.0.4:37195 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137201s
	
	
	==> describe nodes <==
	Name:               ha-736000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-736000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-736000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_04_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-736000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:14:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:14:23 +0000   Sun, 21 Apr 2024 19:04:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:14:23 +0000   Sun, 21 Apr 2024 19:04:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:14:23 +0000   Sun, 21 Apr 2024 19:04:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:14:23 +0000   Sun, 21 Apr 2024 19:05:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.203.42
	  Hostname:    ha-736000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6d9266bf460429381eee461582868fb
	  System UUID:                386751a7-3515-fc4b-adde-e0bf63ba6158
	  Boot ID:                    073f8dcd-ea4d-4254-b5e7-41fa38183661
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pnbbn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-7db6d8ff4d-bp9zb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m55s
	  kube-system                 coredns-7db6d8ff4d-kv8pq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m55s
	  kube-system                 etcd-ha-736000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-wwkr9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m56s
	  kube-system                 kube-apiserver-ha-736000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-736000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-pqs5h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-scheduler-ha-736000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-736000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m52s              kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-736000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-736000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-736000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node ha-736000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node ha-736000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node ha-736000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m56s              node-controller  Node ha-736000 event: Registered Node ha-736000 in Controller
	  Normal  NodeReady                9m43s              kubelet          Node ha-736000 status is now: NodeReady
	  Normal  RegisteredNode           5m48s              node-controller  Node ha-736000 event: Registered Node ha-736000 in Controller
	  Normal  RegisteredNode           109s               node-controller  Node ha-736000 event: Registered Node ha-736000 in Controller
	
	
	Name:               ha-736000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-736000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-736000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_08_48_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:08:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-736000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:14:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:14:20 +0000   Sun, 21 Apr 2024 19:08:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:14:20 +0000   Sun, 21 Apr 2024 19:08:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:14:20 +0000   Sun, 21 Apr 2024 19:08:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:14:20 +0000   Sun, 21 Apr 2024 19:09:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.196.39
	  Hostname:    ha-736000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 de63462d1a074ebbba129500a0137334
	  System UUID:                192f459d-8063-de45-aa5e-eef009d1631a
	  Boot ID:                    6fea686d-c65d-4b3a-a988-3a4ad32f1726
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cmvt9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-736000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m5s
	  kube-system                 kindnet-7j6mw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-736000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-736000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-tj6tp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-736000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-736000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet          Node ha-736000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet          Node ha-736000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet          Node ha-736000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-736000-m02 event: Registered Node ha-736000-m02 in Controller
	  Normal  RegisteredNode           5m48s                  node-controller  Node ha-736000-m02 event: Registered Node ha-736000-m02 in Controller
	  Normal  RegisteredNode           109s                   node-controller  Node ha-736000-m02 event: Registered Node ha-736000-m02 in Controller
	
	
	Name:               ha-736000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-736000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-736000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_12_49_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-736000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:14:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:14:12 +0000   Sun, 21 Apr 2024 19:12:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:14:12 +0000   Sun, 21 Apr 2024 19:12:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:14:12 +0000   Sun, 21 Apr 2024 19:12:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:14:12 +0000   Sun, 21 Apr 2024 19:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.195.51
	  Hostname:    ha-736000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e439a3f6e9b141fe9f08cc149b329157
	  System UUID:                000c990c-4060-cf46-bc96-3f05b191c853
	  Boot ID:                    63eca9ab-7fbf-46f1-bc92-b4952a619d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nttt5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-736000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m9s
	  kube-system                 kindnet-hcfln                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m12s
	  kube-system                 kube-apiserver-ha-736000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-ha-736000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-blktz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-ha-736000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-vip-ha-736000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m6s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  2m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m12s                  node-controller  Node ha-736000-m03 event: Registered Node ha-736000-m03 in Controller
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m13s)  kubelet          Node ha-736000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m13s)  kubelet          Node ha-736000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m13s)  kubelet          Node ha-736000-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-736000-m03 event: Registered Node ha-736000-m03 in Controller
	  Normal  RegisteredNode           109s                   node-controller  Node ha-736000-m03 event: Registered Node ha-736000-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr21 19:03] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.188275] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Apr21 19:04] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.112990] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.609517] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.236156] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.281585] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.935737] systemd-fstab-generator[1181]: Ignoring "noauto" option for root device
	[  +0.226833] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.236102] systemd-fstab-generator[1205]: Ignoring "noauto" option for root device
	[  +0.345609] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.130515] kauditd_printk_skb: 191 callbacks suppressed
	[ +11.474579] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.126499] kauditd_printk_skb: 4 callbacks suppressed
	[  +3.920488] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +6.986844] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.111623] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.817968] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.641866] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[ +16.104312] kauditd_printk_skb: 17 callbacks suppressed
	[Apr21 19:05] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.059320] kauditd_printk_skb: 4 callbacks suppressed
	[Apr21 19:08] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [256d65336b19] <==
	{"level":"info","ts":"2024-04-21T19:12:45.113335Z","caller":"traceutil/trace.go:171","msg":"trace[409076568] linearizableReadLoop","detail":"{readStateIndex:1759; appliedIndex:1760; }","duration":"126.733107ms","start":"2024-04-21T19:12:44.986586Z","end":"2024-04-21T19:12:45.113319Z","steps":["trace[409076568] 'read index received'  (duration: 126.729607ms)","trace[409076568] 'applied index is now lower than readState.Index'  (duration: 2.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T19:12:45.114112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.483913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-736000-m03\" ","response":"range_response_count:1 size:5359"}
	{"level":"info","ts":"2024-04-21T19:12:45.114149Z","caller":"traceutil/trace.go:171","msg":"trace[1274460924] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-736000-m03; range_end:; response_count:1; response_revision:1579; }","duration":"127.608715ms","start":"2024-04-21T19:12:44.986531Z","end":"2024-04-21T19:12:45.114139Z","steps":["trace[1274460924] 'agreement among raft nodes before linearized reading'  (duration: 127.429313ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T19:12:45.114608Z","caller":"traceutil/trace.go:171","msg":"trace[1941874324] transaction","detail":"{read_only:false; response_revision:1580; number_of_response:1; }","duration":"105.114918ms","start":"2024-04-21T19:12:45.009481Z","end":"2024-04-21T19:12:45.114596Z","steps":["trace[1941874324] 'process raft request'  (duration: 104.962417ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T19:12:45.510654Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"8ae9a3b9f37dd1a5","to":"1c13697d6b052c98","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-21T19:12:45.51078Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1c13697d6b052c98"}
	{"level":"info","ts":"2024-04-21T19:12:45.510799Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"8ae9a3b9f37dd1a5","remote-peer-id":"1c13697d6b052c98"}
	{"level":"info","ts":"2024-04-21T19:12:45.55008Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"8ae9a3b9f37dd1a5","to":"1c13697d6b052c98","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-21T19:12:45.550126Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"8ae9a3b9f37dd1a5","remote-peer-id":"1c13697d6b052c98"}
	{"level":"info","ts":"2024-04-21T19:12:45.556336Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8ae9a3b9f37dd1a5","remote-peer-id":"1c13697d6b052c98"}
	{"level":"warn","ts":"2024-04-21T19:12:45.593293Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"1c13697d6b052c98","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-04-21T19:12:45.619482Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"8ae9a3b9f37dd1a5","remote-peer-id":"1c13697d6b052c98"}
	{"level":"warn","ts":"2024-04-21T19:12:45.798999Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.27.195.51:56838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-21T19:12:45.841759Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.27.195.51:56804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-21T19:12:46.000673Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.27.195.51:56856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-21T19:12:46.003528Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.27.195.51:56860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-04-21T19:12:46.074398Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.27.195.51:56866","server-name":"","error":"read tcp 172.27.203.42:2380->172.27.195.51:56866: read: connection reset by peer"}
	{"level":"warn","ts":"2024-04-21T19:12:46.593294Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"1c13697d6b052c98","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-04-21T19:12:47.592331Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"1c13697d6b052c98","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-04-21T19:12:48.101802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ae9a3b9f37dd1a5 switched to configuration voters=(2023076645006814360 10009711665857024421 10350837345037860488)"}
	{"level":"info","ts":"2024-04-21T19:12:48.102644Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"d8cdbbe0771a6d50","local-member-id":"8ae9a3b9f37dd1a5"}
	{"level":"info","ts":"2024-04-21T19:12:48.102912Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"8ae9a3b9f37dd1a5","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"1c13697d6b052c98"}
	{"level":"info","ts":"2024-04-21T19:14:36.772128Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1096}
	{"level":"info","ts":"2024-04-21T19:14:36.898206Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1096,"took":"125.450107ms","hash":1449586600,"current-db-size-bytes":3657728,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2232320,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-21T19:14:36.898271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1449586600,"revision":1096,"compact-revision":-1}
	
	
	==> kernel <==
	 19:14:53 up 12 min,  0 users,  load average: 0.49, 0.41, 0.24
	Linux ha-736000 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [67806b4246ae] <==
	I0421 19:14:08.303151       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	I0421 19:14:18.318650       1 main.go:223] Handling node with IPs: map[172.27.203.42:{}]
	I0421 19:14:18.318802       1 main.go:227] handling current node
	I0421 19:14:18.319386       1 main.go:223] Handling node with IPs: map[172.27.196.39:{}]
	I0421 19:14:18.319484       1 main.go:250] Node ha-736000-m02 has CIDR [10.244.1.0/24] 
	I0421 19:14:18.319723       1 main.go:223] Handling node with IPs: map[172.27.195.51:{}]
	I0421 19:14:18.319871       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	I0421 19:14:28.331252       1 main.go:223] Handling node with IPs: map[172.27.203.42:{}]
	I0421 19:14:28.331364       1 main.go:227] handling current node
	I0421 19:14:28.331381       1 main.go:223] Handling node with IPs: map[172.27.196.39:{}]
	I0421 19:14:28.331390       1 main.go:250] Node ha-736000-m02 has CIDR [10.244.1.0/24] 
	I0421 19:14:28.332204       1 main.go:223] Handling node with IPs: map[172.27.195.51:{}]
	I0421 19:14:28.332601       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	I0421 19:14:38.350123       1 main.go:223] Handling node with IPs: map[172.27.203.42:{}]
	I0421 19:14:38.350270       1 main.go:227] handling current node
	I0421 19:14:38.350365       1 main.go:223] Handling node with IPs: map[172.27.196.39:{}]
	I0421 19:14:38.350407       1 main.go:250] Node ha-736000-m02 has CIDR [10.244.1.0/24] 
	I0421 19:14:38.350787       1 main.go:223] Handling node with IPs: map[172.27.195.51:{}]
	I0421 19:14:38.350881       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	I0421 19:14:48.359977       1 main.go:223] Handling node with IPs: map[172.27.203.42:{}]
	I0421 19:14:48.360083       1 main.go:227] handling current node
	I0421 19:14:48.360100       1 main.go:223] Handling node with IPs: map[172.27.196.39:{}]
	I0421 19:14:48.360109       1 main.go:250] Node ha-736000-m02 has CIDR [10.244.1.0/24] 
	I0421 19:14:48.360633       1 main.go:223] Handling node with IPs: map[172.27.195.51:{}]
	I0421 19:14:48.360862       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [2b4f4a107736] <==
	I0421 19:04:43.887562       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0421 19:04:57.679186       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0421 19:04:57.938460       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0421 19:12:08.048063       1 trace.go:236] Trace[1644805764]: "Update" accept:application/json, */*,audit-id:cf7585d5-92f6-498a-9a9f-26cb5e5c3c20,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (21-Apr-2024 19:12:07.443) (total time: 604ms):
	Trace[1644805764]: ["GuaranteedUpdate etcd3" audit-id:cf7585d5-92f6-498a-9a9f-26cb5e5c3c20,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 603ms (19:12:07.444)
	Trace[1644805764]:  ---"Txn call completed" 602ms (19:12:08.047)]
	Trace[1644805764]: [604.309222ms] [604.309222ms] END
	E0421 19:12:41.832700       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 24.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0421 19:12:41.832706       1 wrap.go:54] timeout or abort while handling: method=PATCH URI="/api/v1/namespaces/default/events/ha-736000-m03.17c86168c85e7c11" audit-ID="20de03e2-3c6b-4f17-be6c-ed3e91b6feed"
	E0421 19:12:41.832726       1 timeout.go:142] post-timeout activity - time-elapsed: 4.001µs, PATCH "/api/v1/namespaces/default/events/ha-736000-m03.17c86168c85e7c11" result: <nil>
	E0421 19:13:52.593758       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61197: use of closed network connection
	E0421 19:13:53.185784       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61199: use of closed network connection
	E0421 19:13:54.935411       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61201: use of closed network connection
	E0421 19:13:55.549712       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61203: use of closed network connection
	E0421 19:13:56.110005       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61205: use of closed network connection
	E0421 19:13:56.676483       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61207: use of closed network connection
	E0421 19:13:57.263349       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61209: use of closed network connection
	E0421 19:13:57.827408       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61211: use of closed network connection
	E0421 19:13:58.395677       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61213: use of closed network connection
	E0421 19:13:59.434013       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61216: use of closed network connection
	E0421 19:14:10.005641       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61218: use of closed network connection
	E0421 19:14:10.571205       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61222: use of closed network connection
	E0421 19:14:21.141729       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61224: use of closed network connection
	E0421 19:14:21.692696       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61227: use of closed network connection
	E0421 19:14:32.272398       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61229: use of closed network connection
	
	
	==> kube-controller-manager [ee3dd828038f] <==
	I0421 19:05:13.308674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.939012ms"
	I0421 19:05:13.309241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.702µs"
	I0421 19:05:13.332560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="139.402µs"
	I0421 19:05:13.403832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.088655ms"
	I0421 19:05:13.406219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.566025ms"
	I0421 19:08:43.528509       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-736000-m02\" does not exist"
	I0421 19:08:43.544339       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-736000-m02" podCIDRs=["10.244.1.0/24"]
	I0421 19:08:47.174389       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-736000-m02"
	I0421 19:12:40.983106       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-736000-m03\" does not exist"
	I0421 19:12:41.012827       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-736000-m03" podCIDRs=["10.244.2.0/24"]
	I0421 19:12:42.251624       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-736000-m03"
	I0421 19:13:45.864292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.396399ms"
	I0421 19:13:45.935909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.291494ms"
	I0421 19:13:45.936613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.2µs"
	I0421 19:13:45.951115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="296.8µs"
	I0421 19:13:46.182540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="229.620312ms"
	I0421 19:13:46.605251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="422.057876ms"
	I0421 19:13:46.720224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.904957ms"
	I0421 19:13:46.720329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.3µs"
	I0421 19:13:49.055916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.850652ms"
	I0421 19:13:49.057169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.701µs"
	I0421 19:13:49.271105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.126136ms"
	I0421 19:13:49.271220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.001µs"
	I0421 19:13:49.515726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.358441ms"
	I0421 19:13:49.516494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.3µs"
	
	
	==> kube-proxy [a9cc5bf6a42d] <==
	I0421 19:05:01.047163       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:05:01.086071       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.203.42"]
	I0421 19:05:01.144752       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:05:01.145018       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:05:01.145065       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:05:01.160872       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:05:01.162754       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:05:01.162823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:05:01.175087       1 config.go:192] "Starting service config controller"
	I0421 19:05:01.175201       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:05:01.175229       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:05:01.175235       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:05:01.181076       1 config.go:319] "Starting node config controller"
	I0421 19:05:01.184185       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:05:01.184195       1 shared_informer.go:320] Caches are synced for node config
	I0421 19:05:01.276687       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 19:05:01.276699       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c4e32eeddc5d] <==
	W0421 19:04:40.574664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 19:04:40.575000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 19:04:40.625611       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 19:04:40.625895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 19:04:40.666703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0421 19:04:40.667086       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0421 19:04:40.749175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0421 19:04:40.749280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0421 19:04:40.749769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 19:04:40.749798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 19:04:40.754598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 19:04:40.754684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 19:04:40.823363       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:04:40.823531       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:04:40.974036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:04:40.974369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:04:41.028360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:04:41.028489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:04:41.137209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 19:04:41.137352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 19:04:41.153169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 19:04:41.153512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:04:41.166919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:04:41.167070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0421 19:04:43.837225       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 19:10:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:10:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:11:44 ha-736000 kubelet[2215]: E0421 19:11:44.023797    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:11:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:11:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:11:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:11:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:12:44 ha-736000 kubelet[2215]: E0421 19:12:44.021980    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:12:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:12:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:12:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:12:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:13:44 ha-736000 kubelet[2215]: E0421 19:13:44.023592    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:13:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:13:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:13:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:13:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:13:45 ha-736000 kubelet[2215]: I0421 19:13:45.881907    2215 topology_manager.go:215] "Topology Admit Handler" podUID="23517c32-496a-41a5-b231-d32d17ca2229" podNamespace="default" podName="busybox-fc5497c4f-pnbbn"
	Apr 21 19:13:45 ha-736000 kubelet[2215]: I0421 19:13:45.975571    2215 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdg5\" (UniqueName: \"kubernetes.io/projected/23517c32-496a-41a5-b231-d32d17ca2229-kube-api-access-4tdg5\") pod \"busybox-fc5497c4f-pnbbn\" (UID: \"23517c32-496a-41a5-b231-d32d17ca2229\") " pod="default/busybox-fc5497c4f-pnbbn"
	Apr 21 19:13:46 ha-736000 kubelet[2215]: I0421 19:13:46.939900    2215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acdf86c89c3e8c324af41a4f457b43e522eda33e2414ccc223e67a72e3a12553"
	Apr 21 19:14:44 ha-736000 kubelet[2215]: E0421 19:14:44.022586    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:14:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:14:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:14:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:14:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 19:14:45.169420    2036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
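Editor's note: the stderr warning above about the Docker CLI context "default" is emitted before every minikube invocation in these logs. A minimal way to inspect the Docker CLI context configuration on the Windows host, assuming the Docker CLI is installed there (these commands are illustrative and were not part of the recorded run):

	docker context ls
	docker context inspect default
	docker context use default

`docker context use default` rewrites the currentContext selection, which is the lookup that is failing in the warning.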
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-736000 -n ha-736000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-736000 -n ha-736000: (12.6737408s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-736000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (70.46s)
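Editor's note: the busybox pods listed in the node descriptions above (for example busybox-fc5497c4f-cmvt9 on ha-736000-m02) are presumably the pods this test pings from, given the test name. A rough manual approximation of that check, assuming those pods are still running and with <HOST_IP> as a placeholder for the host address under test (an illustration, not the test's exact code):

	kubectl --context ha-736000 exec busybox-fc5497c4f-cmvt9 -- ping -c 1 <HOST_IP>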

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (86.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 node stop m02 -v=7 --alsologtostderr: (36.224995s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-736000 status -v=7 --alsologtostderr: exit status 1 (13.4967311s)

                                                
                                                
** stderr ** 
	W0421 19:31:17.232927    6864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0421 19:31:17.329030    6864 out.go:291] Setting OutFile to fd 676 ...
	I0421 19:31:17.329030    6864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:31:17.329030    6864 out.go:304] Setting ErrFile to fd 1020...
	I0421 19:31:17.329030    6864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:31:17.347568    6864 out.go:298] Setting JSON to false
	I0421 19:31:17.348152    6864 mustload.go:65] Loading cluster: ha-736000
	I0421 19:31:17.348257    6864 notify.go:220] Checking for updates...
	I0421 19:31:17.349064    6864 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:31:17.349064    6864 status.go:255] checking status of ha-736000 ...
	I0421 19:31:17.350439    6864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:31:19.599593    6864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:31:19.599677    6864 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:31:19.599677    6864 status.go:330] ha-736000 host status = "Running" (err=<nil>)
	I0421 19:31:19.599677    6864 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:31:19.600165    6864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:31:21.864868    6864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:31:21.864868    6864 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:31:21.864868    6864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:31:24.566230    6864 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:31:24.566230    6864 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:31:24.566230    6864 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:31:24.582350    6864 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 19:31:24.582350    6864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:31:26.819186    6864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:31:26.819186    6864 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:31:26.819388    6864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:31:29.549138    6864 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:31:29.550133    6864 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:31:29.550266    6864 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:31:29.661159    6864 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0787722s)
	I0421 19:31:29.675217    6864 ssh_runner.go:195] Run: systemctl --version
	I0421 19:31:29.702303    6864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:31:29.734220    6864 kubeconfig.go:125] found "ha-736000" server: "https://172.27.207.254:8443"
	I0421 19:31:29.734346    6864 api_server.go:166] Checking apiserver status ...
	I0421 19:31:29.748409    6864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:31:29.792367    6864 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2120/cgroup
	W0421 19:31:29.815296    6864 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 19:31:29.829334    6864 ssh_runner.go:195] Run: ls
	I0421 19:31:29.838006    6864 api_server.go:253] Checking apiserver healthz at https://172.27.207.254:8443/healthz ...
	I0421 19:31:29.850681    6864 api_server.go:279] https://172.27.207.254:8443/healthz returned 200:
	ok
	I0421 19:31:29.850681    6864 status.go:422] ha-736000 apiserver status = Running (err=<nil>)
	I0421 19:31:29.850947    6864 status.go:257] ha-736000 status: &{Name:ha-736000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 19:31:29.850947    6864 status.go:255] checking status of ha-736000-m02 ...
	I0421 19:31:29.852022    6864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-736000 status -v=7 --alsologtostderr" : exit status 1
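Editor's note: the status trace above shows libmachine polling each Hyper-V VM in turn and ends at the Get-VM call for ha-736000-m02, the node that was just stopped. A quick way to confirm the VM states directly on the host, assuming an elevated PowerShell session with the Hyper-V module available (illustrative, not part of the recorded run):

	powershell -NoProfile -Command "Hyper-V\Get-VM ha-736000, ha-736000-m02, ha-736000-m03, ha-736000-m04 | Select-Object Name, State"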
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-736000 -n ha-736000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-736000 -n ha-736000: (12.546868s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 logs -n 25: (9.1947977s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:26 UTC | 21 Apr 24 19:26 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:26 UTC | 21 Apr 24 19:26 UTC |
	|         | ha-736000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:26 UTC | 21 Apr 24 19:26 UTC |
	|         | ha-736000:/home/docker/cp-test_ha-736000-m03_ha-736000.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:26 UTC | 21 Apr 24 19:26 UTC |
	|         | ha-736000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n ha-736000 sudo cat                                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:26 UTC | 21 Apr 24 19:26 UTC |
	|         | /home/docker/cp-test_ha-736000-m03_ha-736000.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:26 UTC | 21 Apr 24 19:27 UTC |
	|         | ha-736000-m02:/home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:27 UTC | 21 Apr 24 19:27 UTC |
	|         | ha-736000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n ha-736000-m02 sudo cat                                                                                   | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:27 UTC | 21 Apr 24 19:27 UTC |
	|         | /home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:27 UTC | 21 Apr 24 19:27 UTC |
	|         | ha-736000-m04:/home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:27 UTC | 21 Apr 24 19:28 UTC |
	|         | ha-736000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n ha-736000-m04 sudo cat                                                                                   | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:28 UTC | 21 Apr 24 19:28 UTC |
	|         | /home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-736000 cp testdata\cp-test.txt                                                                                         | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:28 UTC | 21 Apr 24 19:28 UTC |
	|         | ha-736000-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:28 UTC | 21 Apr 24 19:28 UTC |
	|         | ha-736000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:28 UTC | 21 Apr 24 19:28 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:28 UTC | 21 Apr 24 19:28 UTC |
	|         | ha-736000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:28 UTC | 21 Apr 24 19:29 UTC |
	|         | ha-736000:/home/docker/cp-test_ha-736000-m04_ha-736000.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:29 UTC |
	|         | ha-736000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n ha-736000 sudo cat                                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-736000-m04_ha-736000.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:29 UTC |
	|         | ha-736000-m02:/home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:29 UTC |
	|         | ha-736000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n ha-736000-m02 sudo cat                                                                                   | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:29 UTC | 21 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt                                                                       | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:30 UTC | 21 Apr 24 19:30 UTC |
	|         | ha-736000-m03:/home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n                                                                                                          | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:30 UTC | 21 Apr 24 19:30 UTC |
	|         | ha-736000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-736000 ssh -n ha-736000-m03 sudo cat                                                                                   | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:30 UTC | 21 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-736000 node stop m02 -v=7                                                                                              | ha-736000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:30 UTC | 21 Apr 24 19:31 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
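Context for the audit table above: each "cp" row copies a test file onto one of the ha-736000 nodes, and the paired "ssh -n ... sudo cat" row reads it back to confirm the copy landed. A minimal Go sketch of that copy-then-verify pattern follows, assuming only a minikube binary on PATH and the profile/node names from this run; it is an illustration, not the test suite's actual helper code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // copyAndVerify mirrors one cp/ssh pair from the table: copy src to a node
    // with "minikube cp", then read it back with "minikube ssh -n <node>".
    func copyAndVerify(profile, node, src, dst string) (string, error) {
        if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
            return "", fmt.Errorf("cp failed: %v: %s", err, out)
        }
        out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
        return string(out), err
    }

    func main() {
        got, err := copyAndVerify("ha-736000", "ha-736000-m04", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
        fmt.Println(got, err)
    }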
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 19:01:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 19:01:30.769155    5552 out.go:291] Setting OutFile to fd 720 ...
	I0421 19:01:30.769155    5552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:01:30.769155    5552 out.go:304] Setting ErrFile to fd 716...
	I0421 19:01:30.769155    5552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 19:01:30.796479    5552 out.go:298] Setting JSON to false
	I0421 19:01:30.799790    5552 start.go:129] hostinfo: {"hostname":"minikube6","uptime":11965,"bootTime":1713714124,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 19:01:30.800827    5552 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 19:01:30.808149    5552 out.go:177] * [ha-736000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 19:01:30.814674    5552 notify.go:220] Checking for updates...
	I0421 19:01:30.817436    5552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:01:30.819945    5552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 19:01:30.822588    5552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 19:01:30.825285    5552 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 19:01:30.828109    5552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 19:01:30.831698    5552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 19:01:36.351157    5552 out.go:177] * Using the hyperv driver based on user configuration
	I0421 19:01:36.355841    5552 start.go:297] selected driver: hyperv
	I0421 19:01:36.355841    5552 start.go:901] validating driver "hyperv" against <nil>
	I0421 19:01:36.355841    5552 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 19:01:36.419031    5552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 19:01:36.420517    5552 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:01:36.420604    5552 cni.go:84] Creating CNI manager for ""
	I0421 19:01:36.420703    5552 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0421 19:01:36.420703    5552 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0421 19:01:36.420910    5552 start.go:340] cluster config:
	{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:01:36.421221    5552 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 19:01:36.427492    5552 out.go:177] * Starting "ha-736000" primary control-plane node in "ha-736000" cluster
	I0421 19:01:36.430007    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:01:36.430007    5552 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 19:01:36.430007    5552 cache.go:56] Caching tarball of preloaded images
	I0421 19:01:36.430620    5552 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 19:01:36.431224    5552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 19:01:36.431752    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:01:36.432130    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json: {Name:mkc8725b604d2f8b010420e709bf1023daa6f0a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:01:36.433503    5552 start.go:360] acquireMachinesLock for ha-736000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:01:36.433560    5552 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-736000"
	I0421 19:01:36.433560    5552 start.go:93] Provisioning new machine with config: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:01:36.433560    5552 start.go:125] createHost starting for "" (driver="hyperv")
	I0421 19:01:36.436889    5552 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:01:36.436889    5552 start.go:159] libmachine.API.Create for "ha-736000" (driver="hyperv")
	I0421 19:01:36.436889    5552 client.go:168] LocalClient.Create starting
	I0421 19:01:36.437874    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 19:01:36.438457    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:01:36.438500    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:01:36.438593    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 19:01:36.438593    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:01:36.438593    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:01:36.438593    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 19:01:38.648114    5552 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 19:01:38.648114    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:38.648114    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 19:01:40.461059    5552 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 19:01:40.461059    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:40.461767    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:01:41.979624    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:01:41.980211    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:41.980407    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:01:45.668116    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:01:45.668116    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:45.672440    5552 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:01:46.241084    5552 main.go:141] libmachine: Creating SSH key...
	I0421 19:01:46.440119    5552 main.go:141] libmachine: Creating VM...
	I0421 19:01:46.440119    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:01:49.369367    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:01:49.370213    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:49.370213    5552 main.go:141] libmachine: Using switch "Default Switch"
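The two Get-VMSwitch calls above show how the driver picks a virtual switch for the VM: it asks PowerShell for candidate switches as JSON and, finding no external switch, falls back to the built-in "Default Switch". Below is a rough Go sketch of that query, assuming powershell.exe is on PATH and filtering by switch name instead of the well-known GUID used in the log; it is an illustration of the technique, not minikube's driver code.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch matches the fields the ConvertTo-Json query returns.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    // pickSwitch lists external switches plus the built-in "Default Switch"
    // and returns the first candidate.
    func pickSwitch() (string, error) {
        script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | Where-Object {($_.SwitchType -eq 'External') -or ($_.Name -eq 'Default Switch')} | Sort-Object -Property SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            return "", err
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            return "", err
        }
        if len(switches) == 0 {
            return "", fmt.Errorf("no usable Hyper-V switch found")
        }
        return switches[0].Name, nil // "Default Switch" in the run above
    }

    func main() {
        name, err := pickSwitch()
        fmt.Println(name, err)
    }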
	I0421 19:01:49.370213    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:01:51.197008    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:01:51.197217    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:51.197217    5552 main.go:141] libmachine: Creating VHD
	I0421 19:01:51.197398    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 19:01:54.903849    5552 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 359490DA-85DD-4A6F-B5CD-00C97E3B216B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 19:01:54.903849    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:54.903849    5552 main.go:141] libmachine: Writing magic tar header
	I0421 19:01:54.904187    5552 main.go:141] libmachine: Writing SSH key tar header
	I0421 19:01:54.916926    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 19:01:58.145501    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:01:58.145501    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:01:58.145501    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\disk.vhd' -SizeBytes 20000MB
	I0421 19:02:00.767785    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:00.767785    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:00.768637    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-736000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 19:02:05.143847    5552 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-736000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 19:02:05.143847    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:05.143847    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-736000 -DynamicMemoryEnabled $false
	I0421 19:02:07.444321    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:07.444321    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:07.444442    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-736000 -Count 2
	I0421 19:02:09.662853    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:09.662853    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:09.663131    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-736000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\boot2docker.iso'
	I0421 19:02:12.273575    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:12.273575    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:12.274348    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-736000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\disk.vhd'
	I0421 19:02:15.014777    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:15.015773    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:15.015773    5552 main.go:141] libmachine: Starting VM...
	I0421 19:02:15.015858    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-736000
	I0421 19:02:18.149400    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:18.149400    5552 main.go:141] libmachine: [stderr =====>] : 
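The block above creates and boots the VM through a fixed sequence of Hyper-V cmdlets (New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive, then Start-VM). The following is a condensed Go sketch of driving that same sequence via powershell.exe; the paths and values in main are placeholders, not the ones minikube computes.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ps runs one PowerShell command and surfaces its combined output on failure.
    func ps(cmd string) error {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("powershell %q: %v: %s", cmd, err, out)
        }
        return nil
    }

    // createVM reproduces the create/configure/start sequence from the log.
    func createVM(name, dir, iso, disk, sw string, memMB, cpus int) error {
        steps := []string{
            fmt.Sprintf("Hyper-V\\New-VM %s -Path '%s' -SwitchName '%s' -MemoryStartupBytes %dMB", name, dir, sw, memMB),
            fmt.Sprintf("Hyper-V\\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false", name),
            fmt.Sprintf("Hyper-V\\Set-VMProcessor %s -Count %d", name, cpus),
            fmt.Sprintf("Hyper-V\\Set-VMDvdDrive -VMName %s -Path '%s'", name, iso),
            fmt.Sprintf("Hyper-V\\Add-VMHardDiskDrive -VMName %s -Path '%s'", name, disk),
            "Hyper-V\\Start-VM " + name,
        }
        for _, step := range steps {
            if err := ps(step); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        err := createVM("ha-736000", `C:\vms\ha-736000`, `C:\vms\ha-736000\boot2docker.iso`,
            `C:\vms\ha-736000\disk.vhd`, "Default Switch", 2200, 2)
        fmt.Println(err)
    }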
	I0421 19:02:18.149400    5552 main.go:141] libmachine: Waiting for host to start...
	I0421 19:02:18.150286    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:20.424590    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:20.424590    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:20.424913    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:22.996247    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:22.996247    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:23.999503    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:26.240837    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:26.240837    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:26.240837    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:28.831004    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:28.831004    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:29.840201    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:32.034114    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:32.034114    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:32.034114    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:34.593331    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:34.593331    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:35.595834    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:37.803371    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:37.803371    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:37.804025    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:40.383218    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:02:40.383218    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:41.397866    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:43.634870    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:43.634870    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:43.635192    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:46.302872    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:02:46.302872    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:46.303086    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:48.497182    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:48.497245    5552 main.go:141] libmachine: [stderr =====>] : 
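"Waiting for host to start..." above is a poll loop: the driver repeatedly reads the VM state and the first IP address of its first network adapter until an address is reported. A minimal, self-contained Go sketch of such a loop follows, assuming powershell.exe on PATH; the sleep interval and timeout are illustrative, not minikube's values.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // psOut runs one PowerShell command and returns its trimmed stdout.
    func psOut(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM until it is Running and reports an IPv4 address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := psOut(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil || state != "Running" {
                time.Sleep(time.Second)
                continue
            }
            ip, _ := psOut(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
            if ip != "" {
                return ip, nil // 172.27.203.42 in the run above
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("ha-736000", 5*time.Minute)
        fmt.Println(ip, err)
    }

In this run the loop needed five IP queries (roughly 28 seconds after Start-VM) before 172.27.203.42 appeared.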
	I0421 19:02:48.497245    5552 machine.go:94] provisionDockerMachine start ...
	I0421 19:02:48.497245    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:50.686701    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:50.686725    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:50.686725    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:53.275882    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:02:53.276662    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:53.283075    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:02:53.296056    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:02:53.296056    5552 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:02:53.422702    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:02:53.422702    5552 buildroot.go:166] provisioning hostname "ha-736000"
	I0421 19:02:53.422702    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:02:55.576716    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:02:55.576716    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:55.577706    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:02:58.244225    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:02:58.244501    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:02:58.250965    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:02:58.251253    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:02:58.251253    5552 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-736000 && echo "ha-736000" | sudo tee /etc/hostname
	I0421 19:02:58.407008    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-736000
	
	I0421 19:02:58.407167    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:00.569167    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:00.569472    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:00.569472    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:03.155934    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:03.156583    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:03.163362    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:03.163362    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:03.163362    5552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-736000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-736000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-736000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:03:03.319082    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:03:03.319224    5552 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 19:03:03.319340    5552 buildroot.go:174] setting up certificates
	I0421 19:03:03.319340    5552 provision.go:84] configureAuth start
	I0421 19:03:03.319414    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:05.512506    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:05.512811    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:05.512811    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:08.083232    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:08.084138    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:08.084233    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:10.283567    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:10.283567    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:10.283751    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:12.941557    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:12.942342    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:12.942342    5552 provision.go:143] copyHostCerts
	I0421 19:03:12.942448    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 19:03:12.942448    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 19:03:12.942448    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 19:03:12.943162    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 19:03:12.943970    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 19:03:12.944611    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 19:03:12.944682    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 19:03:12.944772    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 19:03:12.945586    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 19:03:12.946349    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 19:03:12.946349    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 19:03:12.946349    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 19:03:12.947801    5552 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-736000 san=[127.0.0.1 172.27.203.42 ha-736000 localhost minikube]
	I0421 19:03:13.157449    5552 provision.go:177] copyRemoteCerts
	I0421 19:03:13.171734    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:03:13.171734    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:15.350114    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:15.350571    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:15.350631    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:17.956945    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:17.956945    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:17.958289    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:03:18.067148    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8953788s)
	I0421 19:03:18.067148    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 19:03:18.068300    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:03:18.118669    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 19:03:18.119202    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0421 19:03:18.169135    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 19:03:18.169621    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:03:18.220506    5552 provision.go:87] duration metric: took 14.9009391s to configureAuth
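configureAuth above generates a server certificate for the new machine and pushes ca.pem, server.pem and server-key.pem into /etc/docker over SSH. Below is a hedged sketch of the push step only, using a plain ssh client and "sudo tee" in place of minikube's internal SSH runner; the IP, user and file names come from this run, but the code itself is illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // pushCert streams a local PEM file to a root-owned remote path by piping
    // it through "sudo tee" over ssh.
    func pushCert(ip, keyPath, local, remote string) error {
        f, err := os.Open(local)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
            "docker@"+ip, "sudo", "tee", remote)
        cmd.Stdin = f
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("push %s: %v: %s", local, err, out)
        }
        return nil
    }

    func main() {
        pairs := map[string]string{
            "ca.pem":         "/etc/docker/ca.pem",
            "server.pem":     "/etc/docker/server.pem",
            "server-key.pem": "/etc/docker/server-key.pem",
        }
        for local, remote := range pairs {
            if err := pushCert("172.27.203.42", "id_rsa", local, remote); err != nil {
                fmt.Println(err)
            }
        }
    }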
	I0421 19:03:18.220589    5552 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:03:18.221246    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:03:18.221353    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:20.393237    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:20.393237    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:20.393237    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:22.986829    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:22.986829    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:22.993119    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:22.993717    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:22.993717    5552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 19:03:23.123178    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 19:03:23.123347    5552 buildroot.go:70] root file system type: tmpfs
	I0421 19:03:23.123480    5552 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 19:03:23.123480    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:25.297965    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:25.297965    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:25.298376    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:27.908269    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:27.908816    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:27.917771    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:27.917771    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:27.918687    5552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 19:03:28.086293    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 19:03:28.086415    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:30.241996    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:30.241996    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:30.243013    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:32.865415    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:32.865415    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:32.874077    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:32.874077    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:32.874077    5552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 19:03:35.138753    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 19:03:35.138753    5552 machine.go:97] duration metric: took 46.6411776s to provisionDockerMachine
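The docker.service update just above follows a write-then-swap pattern: the generated unit is written to docker.service.new, diffed against the existing unit, and only moved into place (followed by daemon-reload, enable and restart) when it differs, which keeps the step idempotent across repeated provisioning. A small Go helper sketching how that one-liner can be composed; the command text mirrors the log, while the helper itself is illustrative.

    package main

    import "fmt"

    // updateUnitCmd builds the shell one-liner run over SSH: swap in the .new
    // unit and restart the service only when diff reports a change.
    func updateUnitCmd(unit string) string {
        path := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, unit)
    }

    func main() {
        fmt.Println(updateUnitCmd("docker.service"))
    }

In this run the base unit did not exist yet ("diff: can't stat ..."), so the new file was installed and the service enabled for the first time, as the "Created symlink" line shows.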
	I0421 19:03:35.138753    5552 client.go:171] duration metric: took 1m58.700076s to LocalClient.Create
	I0421 19:03:35.139299    5552 start.go:167] duration metric: took 1m58.7015668s to libmachine.API.Create "ha-736000"
	I0421 19:03:35.139443    5552 start.go:293] postStartSetup for "ha-736000" (driver="hyperv")
	I0421 19:03:35.139486    5552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:03:35.151604    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:03:35.151604    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:37.257505    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:37.258393    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:37.258393    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:39.854375    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:39.854375    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:39.854375    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:03:39.971809    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8201705s)
	I0421 19:03:39.985257    5552 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:03:39.993101    5552 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:03:39.993101    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 19:03:39.993646    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 19:03:39.993907    5552 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 19:03:39.994510    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 19:03:40.009904    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:03:40.036626    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 19:03:40.090082    5552 start.go:296] duration metric: took 4.9506043s for postStartSetup
	I0421 19:03:40.093073    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:42.281212    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:42.281212    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:42.281299    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:44.906602    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:44.907583    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:44.907583    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:03:44.910691    5552 start.go:128] duration metric: took 2m8.4759703s to createHost
	I0421 19:03:44.910842    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:47.046407    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:47.046407    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:47.046407    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:49.615625    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:49.615625    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:49.621538    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:49.621897    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:49.621897    5552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:03:49.745934    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713726229.759913049
	
	I0421 19:03:49.745934    5552 fix.go:216] guest clock: 1713726229.759913049
	I0421 19:03:49.745934    5552 fix.go:229] Guest: 2024-04-21 19:03:49.759913049 +0000 UTC Remote: 2024-04-21 19:03:44.9107404 +0000 UTC m=+134.332875701 (delta=4.849172649s)
	I0421 19:03:49.745934    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:51.864353    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:51.864818    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:51.864894    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:54.502136    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:54.502136    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:54.508331    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:03:54.509135    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.203.42 22 <nil> <nil>}
	I0421 19:03:54.509135    5552 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713726229
	I0421 19:03:54.657251    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 19:03:49 UTC 2024
	
	I0421 19:03:54.657323    5552 fix.go:236] clock set: Sun Apr 21 19:03:49 UTC 2024
	 (err=<nil>)
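The clock-fix step above reads the guest clock with "date +%s.%N" over SSH, compares it with the host-side timestamp (a 4.85s delta here), and resets the guest with "sudo date -s @<unix-seconds>". A rough Go sketch of that check, assuming a plain ssh client and an arbitrary drift threshold; minikube's own threshold and the exact value it writes back may differ.

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    // guestTime reads the guest's clock as Unix seconds over ssh.
    func guestTime(ip, keyPath string) (time.Time, error) {
        out, err := exec.Command("ssh", "-i", keyPath, "docker@"+ip, "date", "+%s.%N").Output()
        if err != nil {
            return time.Time{}, err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(int64(secs), 0), nil
    }

    // syncClock resets the guest clock when it drifts too far from the host.
    func syncClock(ip, keyPath string, maxDrift time.Duration) error {
        guest, err := guestTime(ip, keyPath)
        if err != nil {
            return err
        }
        drift := time.Since(guest)
        if drift < 0 {
            drift = -drift
        }
        if drift <= maxDrift {
            return nil
        }
        // e.g. "sudo date -s @1713726229" in the run above
        return exec.Command("ssh", "-i", keyPath, "docker@"+ip,
            "sudo", "date", "-s", "@"+strconv.FormatInt(time.Now().Unix(), 10)).Run()
    }

    func main() {
        fmt.Println(syncClock("172.27.203.42", "id_rsa", 2*time.Second))
    }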
	I0421 19:03:54.657323    5552 start.go:83] releasing machines lock for "ha-736000", held for 2m18.2227814s
	I0421 19:03:54.657508    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:56.805638    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:03:56.805818    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:03:56.805897    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:03:59.454196    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:03:59.454246    5552 main.go:141] libmachine: [stderr =====>] : 
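Each of these round trips is the hyperv driver shelling out to PowerShell: one call for `( Hyper-V\Get-VM <name> ).state` and another for `(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]`, each taking roughly two seconds. A minimal Go sketch of the same queries (illustrative only; psQuery and the VM name are assumptions, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // psQuery runs a PowerShell expression non-interactively and returns trimmed stdout.
    func psQuery(expr string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr,
        ).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        vm := "ha-736000" // assumption: the VM name used in this test run
        state, _ := psQuery(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
        ip, _ := psQuery(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
        fmt.Println(state, ip)
    }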
	I0421 19:03:59.458530    5552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:03:59.458736    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:03:59.470839    5552 ssh_runner.go:195] Run: cat /version.json
	I0421 19:03:59.471878    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:01.617429    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:01.617429    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:01.617429    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:04:01.660528    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:01.660629    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:01.660691    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:04:04.338128    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:04:04.338128    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:04.338370    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:04:04.363845    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:04:04.363845    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:04.364484    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:04:04.588205    5552 ssh_runner.go:235] Completed: cat /version.json: (5.1173293s)
	I0421 19:04:04.588205    5552 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1295592s)
	I0421 19:04:04.601494    5552 ssh_runner.go:195] Run: systemctl --version
	I0421 19:04:04.625794    5552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 19:04:04.635564    5552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:04:04.649566    5552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:04:04.682420    5552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:04:04.682420    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:04:04.682420    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:04:04.737471    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 19:04:04.776605    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 19:04:04.800286    5552 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 19:04:04.815490    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 19:04:04.859890    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:04:04.898481    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 19:04:04.937377    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:04:04.974608    5552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:04:05.011637    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 19:04:05.049390    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 19:04:05.087507    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 19:04:05.122971    5552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:04:05.158158    5552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:04:05.190111    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:05.409983    5552 ssh_runner.go:195] Run: sudo systemctl restart containerd
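The sed pipeline above rewrites /etc/containerd/config.toml in place: pin the sandbox image to registry.k8s.io/pause:3.9, set SystemdCgroup = false (cgroupfs), switch the legacy io.containerd.runtime.v1.linux / runc.v1 names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, then daemon-reload and restart containerd. The same edit expressed as a small Go sketch over a config string (illustrative; minikube runs the equivalent sed commands over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs mirrors the logged sed edits on a containerd config.toml body.
    func setCgroupfs(conf string) string {
        rules := []struct{ re, repl string }{
            {`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
            {`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
            {`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
            {`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
        }
        for _, r := range rules {
            conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
        }
        return conf
    }

    func main() {
        fmt.Print(setCgroupfs("  SystemdCgroup = true\n  conf_dir = \"/etc/cni/net.mk\"\n"))
    }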
	I0421 19:04:05.448087    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:04:05.466371    5552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 19:04:05.508677    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:04:05.549184    5552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:04:05.598844    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:04:05.638574    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:04:05.678738    5552 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 19:04:05.751004    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:04:05.778161    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:04:05.828941    5552 ssh_runner.go:195] Run: which cri-dockerd
	I0421 19:04:05.854363    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 19:04:05.875126    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 19:04:05.924396    5552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 19:04:06.147509    5552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 19:04:06.381492    5552 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 19:04:06.381720    5552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 19:04:06.432949    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:06.657792    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:04:09.243873    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5860634s)
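"configuring docker to use cgroupfs as cgroup driver" writes a small /etc/docker/daemon.json (130 bytes in this run) and restarts docker (2.6s). The file's contents are not shown in the log; a plausible shape, generated by a Go sketch, is below. The exact keys minikube writes are an assumption, apart from the cgroupfs driver that the later kubelet config confirms:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed daemon.json content: force the cgroupfs cgroup driver,
        // matching cgroupDriver: cgroupfs in the KubeletConfiguration later in the log.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }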
	I0421 19:04:09.259176    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 19:04:09.305872    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:04:09.350758    5552 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 19:04:09.586686    5552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 19:04:09.819494    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:10.056078    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 19:04:10.110920    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:04:10.151889    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:10.408280    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 19:04:10.526327    5552 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 19:04:10.540833    5552 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 19:04:10.560112    5552 start.go:562] Will wait 60s for crictl version
	I0421 19:04:10.583646    5552 ssh_runner.go:195] Run: which crictl
	I0421 19:04:10.605954    5552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:04:10.670354    5552 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 19:04:10.683529    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:04:10.732566    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:04:10.772015    5552 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 19:04:10.772015    5552 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 19:04:10.778871    5552 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 19:04:10.781263    5552 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 19:04:10.781263    5552 ip.go:210] interface addr: 172.27.192.1/20
	I0421 19:04:10.795495    5552 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 19:04:10.803012    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:04:10.850333    5552 kubeadm.go:877] updating cluster {Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespac
e:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 19:04:10.850333    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:04:10.859861    5552 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 19:04:10.884882    5552 docker.go:685] Got preloaded images: 
	I0421 19:04:10.884882    5552 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0421 19:04:10.898923    5552 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 19:04:10.936461    5552 ssh_runner.go:195] Run: which lz4
	I0421 19:04:10.943680    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0421 19:04:10.969878    5552 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 19:04:10.978320    5552 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 19:04:10.978554    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0421 19:04:13.083229    5552 docker.go:649] duration metric: took 2.1288251s to copy over tarball
	I0421 19:04:13.096507    5552 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 19:04:21.615659    5552 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5190921s)
	I0421 19:04:21.616198    5552 ssh_runner.go:146] rm: /preloaded.tar.lz4
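The preload flow above is: check whether /preloaded.tar.lz4 already exists on the guest (it does not, first boot), scp the ~360 MB cached image tarball from the host, extract it with lz4 into /var (8.5s), then delete the tarball and restart docker so the images show up. A compressed Go sketch of the guest-side steps (illustrative; the scp transfer is elided):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload mimics the logged steps: fail fast if the tarball is missing
    // (in minikube it would be copied over first), otherwise unpack it and remove it.
    func extractPreload(tarball, destDir string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload tarball missing (would be scp'd over first): %w", err)
        }
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", destDir, "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("tar: %v: %s", err, out)
        }
        return os.Remove(tarball)
    }

    func main() {
        fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
    }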
	I0421 19:04:21.703198    5552 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 19:04:21.723346    5552 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0421 19:04:21.769975    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:22.014696    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:04:25.404114    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3881643s)
	I0421 19:04:25.415866    5552 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 19:04:25.445212    5552 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0421 19:04:25.445307    5552 cache_images.go:84] Images are preloaded, skipping loading
	I0421 19:04:25.445307    5552 kubeadm.go:928] updating node { 172.27.203.42 8443 v1.30.0 docker true true} ...
	I0421 19:04:25.445475    5552 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-736000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.203.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:04:25.456052    5552 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0421 19:04:25.497804    5552 cni.go:84] Creating CNI manager for ""
	I0421 19:04:25.497933    5552 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 19:04:25.497933    5552 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 19:04:25.498039    5552 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.203.42 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-736000 NodeName:ha-736000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.203.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.203.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 19:04:25.498238    5552 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.203.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-736000"
	  kubeletExtraArgs:
	    node-ip: 172.27.203.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.203.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
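The kubeadm config above is rendered from the options logged at kubeadm.go:181: an InitConfiguration (advertise address and port, cri-dockerd socket, node-ip), a ClusterConfiguration (control-plane endpoint, cert SANs, admission plugins, pod/service CIDRs), plus the KubeletConfiguration and KubeProxyConfiguration. A minimal Go text/template sketch of how such a document can be rendered from those parameters; the struct and template here are assumptions for illustration, not minikube's template:

    package main

    import (
        "os"
        "text/template"
    )

    type initParams struct {
        NodeIP, NodeName, ControlPlaneEndpoint, PodCIDR, ServiceCIDR, K8sVersion string
        APIServerPort                                                            int
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/cri-dockerd.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodCIDR}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        // Values taken from the log lines above.
        p := initParams{
            NodeIP: "172.27.203.42", NodeName: "ha-736000",
            ControlPlaneEndpoint: "control-plane.minikube.internal",
            PodCIDR:              "10.244.0.0/16", ServiceCIDR: "10.96.0.0/12",
            K8sVersion: "v1.30.0", APIServerPort: 8443,
        }
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }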
	I0421 19:04:25.498238    5552 kube-vip.go:111] generating kube-vip config ...
	I0421 19:04:25.513247    5552 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 19:04:25.542566    5552 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 19:04:25.542566    5552 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
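This static pod runs kube-vip on the control-plane node: judging by the env vars above, it announces the HA virtual IP 172.27.207.254 via ARP (vip_arp), holds leadership through the plndr-cp-lock lease, and with lb_enable balances API traffic on port 8443, which is how the APIServerHAVIP stays reachable across control-plane nodes. A small Go sketch of the kind of probe a client could run against that VIP (hypothetical check, not part of minikube; TLS verification is skipped because the probe does not hold the cluster CA, and the response status depends on the apiserver's anonymous-auth settings):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The probe does not trust the cluster CA, so skip verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        // Hypothetical probe against the HA VIP fronting the apiservers.
        resp, err := client.Get("https://172.27.207.254:8443/healthz")
        if err != nil {
            fmt.Println("VIP unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("VIP /healthz:", resp.Status)
    }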
	I0421 19:04:25.556663    5552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:04:25.575644    5552 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 19:04:25.590879    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0421 19:04:25.610083    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0421 19:04:25.650466    5552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:04:25.688032    5552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0421 19:04:25.724582    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0421 19:04:25.776336    5552 ssh_runner.go:195] Run: grep 172.27.207.254	control-plane.minikube.internal$ /etc/hosts
	I0421 19:04:25.784514    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:04:25.827921    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:04:26.058956    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:04:26.093274    5552 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000 for IP: 172.27.203.42
	I0421 19:04:26.093274    5552 certs.go:194] generating shared ca certs ...
	I0421 19:04:26.093274    5552 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.104669    5552 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 19:04:26.123493    5552 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 19:04:26.123562    5552 certs.go:256] generating profile certs ...
	I0421 19:04:26.124144    5552 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key
	I0421 19:04:26.124144    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.crt with IP's: []
	I0421 19:04:26.304906    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.crt ...
	I0421 19:04:26.304906    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.crt: {Name:mk864221f165ddb5f2d013dba1047c26a1e5485c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.304906    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key ...
	I0421 19:04:26.304906    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key: {Name:mk413de5828b08b138b88cdfe9e6974631020fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.307461    5552 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40
	I0421 19:04:26.307834    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.203.42 172.27.207.254]
	I0421 19:04:26.439620    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40 ...
	I0421 19:04:26.439620    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40: {Name:mk12ca28fdb0696dcf7324d3690bc3cd0fb51930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.440832    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40 ...
	I0421 19:04:26.440832    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40: {Name:mk4e0ce450f4a7e20327c5c3823871a125afc773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.441979    5552 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.da17cb40 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt
	I0421 19:04:26.452434    5552 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.da17cb40 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key
	I0421 19:04:26.454433    5552 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key
	I0421 19:04:26.454797    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt with IP's: []
	I0421 19:04:26.654061    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt ...
	I0421 19:04:26.654061    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt: {Name:mk36ab8a1f5776f6510e50d2f510085260e82b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:26.655385    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key ...
	I0421 19:04:26.655385    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key: {Name:mk4cb5d6ed1625767c437cba204364341fbcf0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
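Here certs.go generates the profile certificates: a client cert for minikube-user, an apiserver serving cert whose SANs include the service IP 10.96.0.1, the node IP 172.27.203.42 and the HA VIP 172.27.207.254, and the aggregator proxy-client cert, each signed by the shared minikube CA skipped as already valid above. A compact Go sketch of issuing a SAN-bearing certificate; it is self-signed for brevity, whereas minikube signs with its CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // SANs matching the ones logged above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("172.27.203.42"), net.ParseIP("172.27.207.254"),
            },
        }
        // Self-signed: the template doubles as the parent. minikube would pass its CA here.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }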
	I0421 19:04:26.656674    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:04:26.656674    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:04:26.656674    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:04:26.657377    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:04:26.657377    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:04:26.657377    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:04:26.657972    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:04:26.666469    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:04:26.667472    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 19:04:26.675604    5552 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 19:04:26.675604    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 19:04:26.675604    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 19:04:26.676314    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 19:04:26.676534    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 19:04:26.677110    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 19:04:26.677567    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 19:04:26.677567    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:26.677567    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 19:04:26.679536    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:04:26.732440    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:04:26.781676    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:04:26.834091    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:04:26.886039    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 19:04:26.945983    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0421 19:04:26.989297    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:04:27.042879    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 19:04:27.092744    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 19:04:27.147945    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:04:27.199609    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 19:04:27.253541    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 19:04:27.312603    5552 ssh_runner.go:195] Run: openssl version
	I0421 19:04:27.339955    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 19:04:27.376654    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 19:04:27.384522    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 19:04:27.398003    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 19:04:27.422031    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:04:27.460021    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:04:27.497253    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:27.505943    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:27.521167    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:04:27.546633    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:04:27.582900    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 19:04:27.623889    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 19:04:27.630916    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 19:04:27.646961    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 19:04:27.673140    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
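The loop just above installs each uploaded PEM into the guest's trust store: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients pick it up. A minimal Go sketch of those two steps, shelling out to openssl as the log does (paths here are taken from the log; the helper name is an assumption):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCert links certPath into /etc/ssl/certs under its OpenSSL subject hash.
    func installCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // equivalent of ln -fs: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }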
	I0421 19:04:27.708831    5552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:04:27.715534    5552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:04:27.715918    5552 kubeadm.go:391] StartCluster: {Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:d
efault APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:04:27.726804    5552 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 19:04:27.763294    5552 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 19:04:27.796784    5552 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 19:04:27.830565    5552 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 19:04:27.850821    5552 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 19:04:27.850890    5552 kubeadm.go:156] found existing configuration files:
	
	I0421 19:04:27.864368    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 19:04:27.886055    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 19:04:27.901280    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 19:04:27.937988    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 19:04:27.958637    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 19:04:27.973010    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 19:04:28.011309    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 19:04:28.041505    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 19:04:28.056843    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 19:04:28.094003    5552 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 19:04:28.115624    5552 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 19:04:28.131210    5552 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 19:04:28.156280    5552 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 19:04:28.475739    5552 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 19:04:28.475895    5552 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 19:04:28.685695    5552 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 19:04:28.685786    5552 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 19:04:28.686172    5552 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0421 19:04:29.035161    5552 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 19:04:29.039754    5552 out.go:204]   - Generating certificates and keys ...
	I0421 19:04:29.039954    5552 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 19:04:29.040173    5552 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 19:04:29.842647    5552 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 19:04:30.030494    5552 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 19:04:30.142205    5552 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 19:04:30.752084    5552 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 19:04:30.997008    5552 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 19:04:30.997008    5552 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-736000 localhost] and IPs [172.27.203.42 127.0.0.1 ::1]
	I0421 19:04:31.192128    5552 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 19:04:31.192689    5552 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-736000 localhost] and IPs [172.27.203.42 127.0.0.1 ::1]
	I0421 19:04:31.354373    5552 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 19:04:31.455055    5552 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 19:04:31.599614    5552 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 19:04:31.599614    5552 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 19:04:31.781223    5552 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 19:04:31.913360    5552 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 19:04:32.063695    5552 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 19:04:32.405612    5552 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 19:04:32.787755    5552 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 19:04:32.788754    5552 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 19:04:32.792935    5552 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 19:04:32.797405    5552 out.go:204]   - Booting up control plane ...
	I0421 19:04:32.797405    5552 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 19:04:32.798902    5552 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 19:04:32.800001    5552 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 19:04:32.822977    5552 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 19:04:32.822977    5552 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 19:04:32.823983    5552 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 19:04:33.054305    5552 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 19:04:33.054496    5552 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 19:04:34.056012    5552 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002214916s
	I0421 19:04:34.056611    5552 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 19:04:42.946409    5552 kubeadm.go:309] [api-check] The API server is healthy after 8.889517146s
	I0421 19:04:42.967861    5552 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 19:04:43.010739    5552 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 19:04:43.102095    5552 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 19:04:43.102633    5552 kubeadm.go:309] [mark-control-plane] Marking the node ha-736000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 19:04:43.122719    5552 kubeadm.go:309] [bootstrap-token] Using token: 7gx0zq.bjmn3uvg7raru7d7
	I0421 19:04:43.127348    5552 out.go:204]   - Configuring RBAC rules ...
	I0421 19:04:43.127738    5552 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 19:04:43.141987    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 19:04:43.159935    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 19:04:43.167506    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 19:04:43.177890    5552 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 19:04:43.193066    5552 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 19:04:43.361823    5552 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 19:04:43.852415    5552 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 19:04:44.359027    5552 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 19:04:44.360563    5552 kubeadm.go:309] 
	I0421 19:04:44.360563    5552 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 19:04:44.360563    5552 kubeadm.go:309] 
	I0421 19:04:44.360563    5552 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 19:04:44.360563    5552 kubeadm.go:309] 
	I0421 19:04:44.360563    5552 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 19:04:44.360563    5552 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 19:04:44.361205    5552 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 19:04:44.361390    5552 kubeadm.go:309] 
	I0421 19:04:44.361486    5552 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 19:04:44.361623    5552 kubeadm.go:309] 
	I0421 19:04:44.361623    5552 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 19:04:44.361623    5552 kubeadm.go:309] 
	I0421 19:04:44.361623    5552 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 19:04:44.361623    5552 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 19:04:44.361623    5552 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 19:04:44.361623    5552 kubeadm.go:309] 
	I0421 19:04:44.362163    5552 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 19:04:44.362637    5552 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 19:04:44.362637    5552 kubeadm.go:309] 
	I0421 19:04:44.362637    5552 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7gx0zq.bjmn3uvg7raru7d7 \
	I0421 19:04:44.362637    5552 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 \
	I0421 19:04:44.363206    5552 kubeadm.go:309] 	--control-plane 
	I0421 19:04:44.363317    5552 kubeadm.go:309] 
	I0421 19:04:44.363614    5552 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 19:04:44.363675    5552 kubeadm.go:309] 
	I0421 19:04:44.363830    5552 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7gx0zq.bjmn3uvg7raru7d7 \
	I0421 19:04:44.363830    5552 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 19:04:44.364960    5552 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 19:04:44.365046    5552 cni.go:84] Creating CNI manager for ""
	I0421 19:04:44.365046    5552 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 19:04:44.367207    5552 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0421 19:04:44.384305    5552 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 19:04:44.392708    5552 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 19:04:44.392708    5552 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0421 19:04:44.446584    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 19:04:45.103957    5552 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 19:04:45.118640    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-736000 minikube.k8s.io/updated_at=2024_04_21T19_04_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-736000 minikube.k8s.io/primary=true
	I0421 19:04:45.119636    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:45.131295    5552 ops.go:34] apiserver oom_adj: -16
	I0421 19:04:45.436046    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:45.945670    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:46.436109    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:46.935669    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:47.438082    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:47.938686    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:48.441995    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:48.942248    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:49.445045    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:49.941613    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:50.440983    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:50.942109    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:51.448280    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:51.949487    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:52.433831    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:52.936277    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:53.439401    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:53.944252    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:54.447347    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:54.947896    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:55.436395    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:55.938358    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:56.438762    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:56.941961    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:57.447496    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 19:04:57.564796    5552 kubeadm.go:1107] duration metric: took 12.4607504s to wait for elevateKubeSystemPrivileges
	W0421 19:04:57.564925    5552 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 19:04:57.564925    5552 kubeadm.go:393] duration metric: took 29.8487959s to StartCluster
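The burst of identical `kubectl get sa default` invocations above is a readiness poll: minikube retries roughly every 500 ms until the `default` service account exists, then records the elevateKubeSystemPrivileges duration. Below is a minimal Go sketch of that pattern, not minikube's actual code: it runs kubectl locally instead of through the test's SSH runner, and the helper name and timeout are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls "kubectl get sa default" until it succeeds
// or the deadline passes, mirroring the retry loop visible in the log above.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account is visible; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}
```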
	I0421 19:04:57.565034    5552 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:57.565143    5552 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:04:57.566997    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:04:57.568305    5552 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:04:57.568305    5552 start.go:240] waiting for startup goroutines ...
	I0421 19:04:57.568305    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 19:04:57.568305    5552 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 19:04:57.568305    5552 addons.go:69] Setting storage-provisioner=true in profile "ha-736000"
	I0421 19:04:57.568305    5552 addons.go:234] Setting addon storage-provisioner=true in "ha-736000"
	I0421 19:04:57.568848    5552 addons.go:69] Setting default-storageclass=true in profile "ha-736000"
	I0421 19:04:57.568949    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:04:57.568991    5552 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-736000"
	I0421 19:04:57.569282    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:04:57.569895    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:57.569895    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:57.785901    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 19:04:58.303516    5552 start.go:946] {"host.minikube.internal": 172.27.192.1} host record injected into CoreDNS's ConfigMap
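The pipeline above rewrites the CoreDNS ConfigMap so that `host.minikube.internal` resolves to the host gateway (172.27.192.1) before the result is fed to `kubectl replace -f -`. The following is a minimal sketch of the host-record part of that Corefile edit done in Go instead of sed; it is not minikube's implementation, and the sample Corefile is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block immediately before the
// "forward . /etc/resolv.conf" line, the same position the sed expression targets.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "172.27.192.1"))
}
```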
	I0421 19:04:59.846430    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:59.846430    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:59.846430    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:04:59.846608    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:04:59.849412    5552 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 19:04:59.847523    5552 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:04:59.851925    5552 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:04:59.851925    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 19:04:59.851925    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:04:59.852569    5552 kapi.go:59] client config for ha-736000: &rest.Config{Host:"https://172.27.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 19:04:59.853826    5552 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 19:04:59.853826    5552 addons.go:234] Setting addon default-storageclass=true in "ha-736000"
	I0421 19:04:59.854406    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:04:59.855260    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:05:02.137208    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:02.137208    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:02.137208    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:02.272867    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:02.272867    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:02.273304    5552 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 19:05:02.273384    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 19:05:02.273442    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:05:04.567897    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:04.568903    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:04.568965    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:04.952740    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:05:04.953782    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:04.954443    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:05:05.104670    5552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 19:05:07.291443    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:05:07.292008    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:07.292336    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:05:07.429614    5552 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 19:05:07.626693    5552 round_trippers.go:463] GET https://172.27.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0421 19:05:07.626777    5552 round_trippers.go:469] Request Headers:
	I0421 19:05:07.626777    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:05:07.626777    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:05:07.640532    5552 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 19:05:07.642341    5552 round_trippers.go:463] PUT https://172.27.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0421 19:05:07.642410    5552 round_trippers.go:469] Request Headers:
	I0421 19:05:07.642410    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:05:07.642410    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:05:07.642410    5552 round_trippers.go:473]     Content-Type: application/json
	I0421 19:05:07.652563    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:05:07.658203    5552 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0421 19:05:07.660696    5552 addons.go:505] duration metric: took 10.0923197s for enable addons: enabled=[storage-provisioner default-storageclass]
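The GET and PUT against `/apis/storage.k8s.io/v1/storageclasses` above come from the default-storageclass addon marking `standard` as the default class. Below is a minimal client-go sketch of the same kind of read, assuming the kubeconfig path shown earlier in this log; it is not the kapi.go helper itself.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path as written in the log; adjust for your own environment.
	kubeconfig := `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of the GET .../storage.k8s.io/v1/storageclasses round trip above.
	scs, err := client.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sc := range scs.Items {
		fmt.Println("storageclass:", sc.Name)
	}
}
```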
	I0421 19:05:07.660872    5552 start.go:245] waiting for cluster config update ...
	I0421 19:05:07.660872    5552 start.go:254] writing updated cluster config ...
	I0421 19:05:07.668354    5552 out.go:177] 
	I0421 19:05:07.675699    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:05:07.675699    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:05:07.681640    5552 out.go:177] * Starting "ha-736000-m02" control-plane node in "ha-736000" cluster
	I0421 19:05:07.685843    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:05:07.686020    5552 cache.go:56] Caching tarball of preloaded images
	I0421 19:05:07.686125    5552 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 19:05:07.686125    5552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 19:05:07.686827    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:05:07.692430    5552 start.go:360] acquireMachinesLock for ha-736000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:05:07.693007    5552 start.go:364] duration metric: took 576.6µs to acquireMachinesLock for "ha-736000-m02"
	I0421 19:05:07.693227    5552 start.go:93] Provisioning new machine with config: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:05:07.693512    5552 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0421 19:05:07.700324    5552 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:05:07.700734    5552 start.go:159] libmachine.API.Create for "ha-736000" (driver="hyperv")
	I0421 19:05:07.700794    5552 client.go:168] LocalClient.Create starting
	I0421 19:05:07.700944    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 19:05:07.701606    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:05:07.701606    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:05:07.701795    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 19:05:07.701795    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:05:07.702108    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:05:07.702336    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 19:05:09.713653    5552 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 19:05:09.713653    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:09.714391    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 19:05:11.563036    5552 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 19:05:11.563036    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:11.563280    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:05:13.141366    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:05:13.141366    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:13.142386    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:05:16.967624    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:05:16.968587    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:16.971314    5552 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:05:17.532703    5552 main.go:141] libmachine: Creating SSH key...
	I0421 19:05:17.749009    5552 main.go:141] libmachine: Creating VM...
	I0421 19:05:17.749009    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:05:20.796382    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:05:20.796382    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:20.796382    5552 main.go:141] libmachine: Using switch "Default Switch"
	I0421 19:05:20.796382    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:05:22.674093    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:05:22.674093    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:22.674188    5552 main.go:141] libmachine: Creating VHD
	I0421 19:05:22.674188    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 19:05:26.482813    5552 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5F01B524-1FF1-472D-8B06-C8BC95607249
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 19:05:26.482883    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:26.482883    5552 main.go:141] libmachine: Writing magic tar header
	I0421 19:05:26.482883    5552 main.go:141] libmachine: Writing SSH key tar header
	I0421 19:05:26.492850    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 19:05:29.724475    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:29.724999    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:29.724999    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\disk.vhd' -SizeBytes 20000MB
	I0421 19:05:32.306610    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:32.306956    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:32.307065    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-736000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 19:05:36.065155    5552 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-736000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 19:05:36.065155    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:36.065304    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-736000-m02 -DynamicMemoryEnabled $false
	I0421 19:05:38.380352    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:38.381339    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:38.381413    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-736000-m02 -Count 2
	I0421 19:05:40.595994    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:40.595994    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:40.596107    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-736000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\boot2docker.iso'
	I0421 19:05:43.242471    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:43.242471    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:43.243672    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-736000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\disk.vhd'
	I0421 19:05:46.003739    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:46.003739    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:46.003739    5552 main.go:141] libmachine: Starting VM...
	I0421 19:05:46.003739    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-736000-m02
	I0421 19:05:49.147947    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:49.148749    5552 main.go:141] libmachine: [stderr =====>] : 
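Every step of the m02 machine creation above is a separate `powershell.exe -NoProfile -NonInteractive` invocation of a Hyper-V cmdlet. Below is a compact sketch of that sequence, not the libmachine Hyper-V driver itself: the `posh` helper is hypothetical, the paths, sizes and VM name are copied from the log, and error handling is reduced to panics.

```go
package main

import (
	"fmt"
	"os/exec"
)

// posh runs a single PowerShell command the way the log lines above show it being run.
func posh(command string) string {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command,
	).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s: %v\n%s", command, err, out))
	}
	return string(out)
}

func main() {
	vm := "ha-736000-m02"
	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\` + vm

	posh(fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir))
	posh(fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir))
	posh(fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir))
	posh(fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, vm, dir))
	posh(fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, vm))
	posh(fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, vm))
	posh(fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, vm, dir))
	posh(fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, vm, dir))
	posh(fmt.Sprintf(`Hyper-V\Start-VM %s`, vm))
}
```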
	I0421 19:05:49.148749    5552 main.go:141] libmachine: Waiting for host to start...
	I0421 19:05:49.148855    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:05:51.414836    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:51.414836    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:51.414836    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:54.005405    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:54.005991    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:55.013311    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:05:57.215410    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:05:57.215644    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:05:57.215644    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:05:59.817941    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:05:59.818198    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:00.831587    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:03.065763    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:03.065763    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:03.065763    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:05.648098    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:06:05.648098    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:06.658381    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:08.881166    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:08.881166    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:08.881994    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:11.452229    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:06:11.452229    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:12.459363    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:14.705118    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:14.706109    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:14.706109    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:17.387537    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:17.387619    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:17.387619    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:19.581367    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:19.581688    5552 main.go:141] libmachine: [stderr =====>] : 
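The "Waiting for host to start..." phase above repeatedly queries the VM state and its first network adapter until an IPv4 address (here 172.27.196.39) shows up. A minimal sketch of that poll follows, reusing the same hypothetical `posh` wrapper from the earlier sketch; the timeout is an assumption.

```go
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

// posh runs a PowerShell command and returns trimmed stdout (errors ignored for brevity).
func posh(command string) string {
	out, _ := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command,
	).CombinedOutput()
	return strings.TrimSpace(string(out))
}

// waitForIP polls the VM until its first adapter reports a parseable IP address.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if posh(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm)) == "Running" {
			ip := posh(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if net.ParseIP(ip) != nil {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("no IP for %s after %s", vm, timeout)
}

func main() {
	ip, err := waitForIP("ha-736000-m02", 5*time.Minute)
	fmt.Println(ip, err)
}
```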
	I0421 19:06:19.581688    5552 machine.go:94] provisionDockerMachine start ...
	I0421 19:06:19.581883    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:21.786074    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:21.786615    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:21.786718    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:24.448538    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:24.448538    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:24.455528    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:24.455528    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:24.455528    5552 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:06:24.592817    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:06:24.592880    5552 buildroot.go:166] provisioning hostname "ha-736000-m02"
	I0421 19:06:24.592880    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:26.796991    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:26.796991    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:26.797338    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:29.483085    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:29.483085    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:29.490249    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:29.490316    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:29.490316    5552 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-736000-m02 && echo "ha-736000-m02" | sudo tee /etc/hostname
	I0421 19:06:29.650175    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-736000-m02
	
	I0421 19:06:29.650236    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:31.777015    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:31.777015    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:31.778063    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:34.386248    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:34.386248    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:34.392798    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:34.393530    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:34.393530    5552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-736000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-736000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-736000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:06:34.537154    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 19:06:34.537154    5552 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 19:06:34.537154    5552 buildroot.go:174] setting up certificates
	I0421 19:06:34.537154    5552 provision.go:84] configureAuth start
	I0421 19:06:34.537688    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:36.710366    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:36.710989    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:36.711049    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:39.342276    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:39.342543    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:39.342543    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:41.500093    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:41.500093    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:41.500338    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:44.108611    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:44.108675    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:44.108675    5552 provision.go:143] copyHostCerts
	I0421 19:06:44.108828    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 19:06:44.109340    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 19:06:44.109433    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 19:06:44.109944    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 19:06:44.111156    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 19:06:44.111156    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 19:06:44.111156    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 19:06:44.111945    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 19:06:44.113135    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 19:06:44.113481    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 19:06:44.113481    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 19:06:44.114064    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 19:06:44.115030    5552 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-736000-m02 san=[127.0.0.1 172.27.196.39 ha-736000-m02 localhost minikube]
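provision.go then issues a server certificate signed by the minikube CA with the SAN list shown above (127.0.0.1, 172.27.196.39, ha-736000-m02, localhost, minikube). Below is a minimal crypto/x509 sketch of signing such a certificate; the ephemeral in-memory CA stands in for the ca.pem/ca-key.pem files referenced in the log, and key type and validity period are assumptions.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Ephemeral CA key plus self-signed CA certificate (stand-in for ca.pem/ca-key.pem).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-736000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-736000-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.196.39")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```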
	I0421 19:06:44.723267    5552 provision.go:177] copyRemoteCerts
	I0421 19:06:44.737676    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:06:44.737676    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:46.919584    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:46.919776    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:46.919854    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:49.497523    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:49.497523    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:49.498451    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:06:49.604625    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8669139s)
	I0421 19:06:49.604625    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 19:06:49.605647    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:06:49.656825    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 19:06:49.657362    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 19:06:49.705743    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 19:06:49.706254    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 19:06:49.759260    5552 provision.go:87] duration metric: took 15.2219978s to configureAuth
	I0421 19:06:49.759260    5552 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:06:49.760262    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:06:49.760262    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:51.944997    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:51.946153    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:51.946153    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:54.558952    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:54.559360    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:54.569702    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:54.569702    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:54.569702    5552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 19:06:54.702604    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 19:06:54.702712    5552 buildroot.go:70] root file system type: tmpfs
	I0421 19:06:54.703189    5552 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 19:06:54.703244    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:06:56.874158    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:06:56.875197    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:56.875231    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:06:59.477857    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:06:59.478134    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:06:59.484057    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:06:59.484465    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:06:59.484465    5552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.203.42"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 19:06:59.640807    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.203.42
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 19:06:59.640923    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:01.738300    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:01.738377    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:01.738471    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:04.327283    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:04.327747    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:04.335013    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:07:04.335147    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:07:04.335147    5552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 19:07:06.638246    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 19:07:06.638246    5552 machine.go:97] duration metric: took 47.0562236s to provisionDockerMachine
	I0421 19:07:06.638246    5552 client.go:171] duration metric: took 1m58.9366079s to LocalClient.Create
	I0421 19:07:06.638246    5552 start.go:167] duration metric: took 1m58.9366678s to libmachine.API.Create "ha-736000"
	I0421 19:07:06.638246    5552 start.go:293] postStartSetup for "ha-736000-m02" (driver="hyperv")
	I0421 19:07:06.638246    5552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:07:06.652103    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:07:06.652103    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:08.815691    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:08.815691    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:08.816547    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:11.433445    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:11.433445    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:11.434555    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:07:11.563623    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9114852s)
	I0421 19:07:11.578158    5552 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:07:11.587695    5552 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:07:11.587762    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 19:07:11.587817    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 19:07:11.588591    5552 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 19:07:11.588591    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 19:07:11.603708    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:07:11.622715    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 19:07:11.673716    5552 start.go:296] duration metric: took 5.0354338s for postStartSetup
	I0421 19:07:11.676704    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:13.848297    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:13.848297    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:13.848297    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:16.502813    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:16.503493    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:16.503493    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:07:16.506435    5552 start.go:128] duration metric: took 2m8.8116112s to createHost
	I0421 19:07:16.506569    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:18.673150    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:18.673150    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:18.673150    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:21.315039    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:21.315039    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:21.322073    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:07:21.322577    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:07:21.322652    5552 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 19:07:21.448006    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713726441.448751925
	
	I0421 19:07:21.448060    5552 fix.go:216] guest clock: 1713726441.448751925
	I0421 19:07:21.448060    5552 fix.go:229] Guest: 2024-04-21 19:07:21.448751925 +0000 UTC Remote: 2024-04-21 19:07:16.5065063 +0000 UTC m=+345.927139301 (delta=4.942245625s)
	I0421 19:07:21.448217    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:23.604764    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:23.604822    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:23.604822    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:26.269352    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:26.269352    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:26.277538    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:07:26.277937    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.196.39 22 <nil> <nil>}
	I0421 19:07:26.278031    5552 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713726441
	I0421 19:07:26.423452    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 19:07:21 UTC 2024
	
	I0421 19:07:26.423452    5552 fix.go:236] clock set: Sun Apr 21 19:07:21 UTC 2024
	 (err=<nil>)
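The exchange above is the guest-clock fix: the VM clock is read over SSH (the `date +%!s(MISSING).%!N(MISSING)` noise is the logger mangling the format verbs of what is effectively `date +%s.%N`), compared with the host-side timestamp, and, once the roughly 4.9s delta is known, reset with an absolute `sudo date -s @<epoch>`. A minimal stand-alone sketch of that decision in Go follows; the helper name, the 2-second tolerance, and the choice of the host epoch are assumptions for illustration, not the actual fix.go logic.

// clocksync_sketch.go -- illustrative only, not minikube's provisioning code.
package main

import (
	"fmt"
	"time"
)

// buildClockFix decides whether a guest clock needs resetting and, if so,
// returns the command a provisioner could run over SSH (hypothetical helper).
func buildClockFix(host, guest time.Time, tolerance time.Duration) (string, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		return "", false // close enough, leave the guest clock alone
	}
	// Reset the guest to an absolute epoch, as "sudo date -s @1713726441" does above.
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
}

func main() {
	host := time.Now()
	guest := host.Add(5 * time.Second) // pretend ~5s of drift, like the logged delta
	if cmd, ok := buildClockFix(host, guest, 2*time.Second); ok {
		fmt.Println("would run over SSH:", cmd)
	}
}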
	I0421 19:07:26.423452    5552 start.go:83] releasing machines lock for "ha-736000-m02", held for 2m18.7294005s
	I0421 19:07:26.424006    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:28.659593    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:28.659593    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:28.659593    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:31.307840    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:31.308137    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:31.311626    5552 out.go:177] * Found network options:
	I0421 19:07:31.314485    5552 out.go:177]   - NO_PROXY=172.27.203.42
	W0421 19:07:31.316813    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:07:31.318181    5552 out.go:177]   - NO_PROXY=172.27.203.42
	W0421 19:07:31.321514    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:07:31.322797    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:07:31.326109    5552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:07:31.326254    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:31.335806    5552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 19:07:31.336818    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m02 ).state
	I0421 19:07:33.558045    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:33.558679    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:36.334163    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:36.334163    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:36.334163    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:07:36.361767    5552 main.go:141] libmachine: [stdout =====>] : 172.27.196.39
	
	I0421 19:07:36.361767    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:36.362299    5552 sshutil.go:53] new ssh client: &{IP:172.27.196.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m02\id_rsa Username:docker}
	I0421 19:07:36.436278    5552 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1004359s)
	W0421 19:07:36.436278    5552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:07:36.452324    5552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:07:36.588566    5552 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.26236s)
	I0421 19:07:36.588566    5552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:07:36.588566    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:07:36.588566    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:07:36.646964    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 19:07:36.683642    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 19:07:36.707446    5552 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 19:07:36.722523    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 19:07:36.759314    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:07:36.795869    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 19:07:36.838559    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:07:36.874930    5552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:07:36.907895    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 19:07:36.939993    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 19:07:36.976624    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 19:07:37.016479    5552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:07:37.050022    5552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:07:37.083472    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:37.316947    5552 ssh_runner.go:195] Run: sudo systemctl restart containerd
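The run of sed commands above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), normalizes the runc runtime names, points conf_dir at /etc/cni/net.d, re-enables unprivileged ports, and then restarts containerd. As a rough in-memory equivalent of the central substitution, here is a small Go sketch; the function name and sample TOML are illustrative, and minikube itself shells these edits out over SSH exactly as logged.

// containerd_cgroupfs_sketch.go -- in-memory stand-in for the sed edit above.
package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs rewrites any "SystemdCgroup = ..." line to false, which is the
// substitution that switches containerd to the cgroupfs driver.
func forceCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup\s*=.*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"  SystemdCgroup = true\n"
	fmt.Print(forceCgroupfs(in))
}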
	I0421 19:07:37.355903    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:07:37.370588    5552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 19:07:37.419600    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:07:37.462311    5552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:07:37.510241    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:07:37.552195    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:07:37.594916    5552 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 19:07:37.677804    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:07:37.710754    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:07:37.765570    5552 ssh_runner.go:195] Run: which cri-dockerd
	I0421 19:07:37.785643    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 19:07:37.809359    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 19:07:37.862713    5552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 19:07:38.095142    5552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 19:07:38.308663    5552 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 19:07:38.308787    5552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 19:07:38.360643    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:38.576226    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:07:41.149289    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5729449s)
	I0421 19:07:41.162923    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 19:07:41.204134    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:07:41.247069    5552 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 19:07:41.474152    5552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 19:07:41.700145    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:41.938709    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 19:07:41.994817    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:07:42.039196    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:42.274676    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 19:07:42.394442    5552 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 19:07:42.408552    5552 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 19:07:42.418505    5552 start.go:562] Will wait 60s for crictl version
	I0421 19:07:42.431175    5552 ssh_runner.go:195] Run: which crictl
	I0421 19:07:42.455227    5552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:07:42.521470    5552 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 19:07:42.531089    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:07:42.584978    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:07:42.625863    5552 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 19:07:42.629432    5552 out.go:177]   - env NO_PROXY=172.27.203.42
	I0421 19:07:42.632347    5552 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 19:07:42.637683    5552 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 19:07:42.637828    5552 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 19:07:42.637828    5552 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 19:07:42.637828    5552 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 19:07:42.640156    5552 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 19:07:42.640156    5552 ip.go:210] interface addr: 172.27.192.1/20
	I0421 19:07:42.654525    5552 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 19:07:42.662580    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
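The grep/echo/cp one-liner above is an idempotent /etc/hosts update: drop any existing host.minikube.internal mapping, then append the current host-only IP (172.27.192.1). A string-level Go sketch of the same upsert, with hypothetical names:

// hosts_entry_sketch.go -- string-level version of the grep/echo/cp one-liner above.
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any stale mapping for name and appends ip<TAB>name.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as: grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "172.27.192.1", "host.minikube.internal"))
}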
	I0421 19:07:42.691821    5552 mustload.go:65] Loading cluster: ha-736000
	I0421 19:07:42.692926    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:07:42.693879    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:07:44.821210    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:44.821334    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:44.821334    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:07:44.822096    5552 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000 for IP: 172.27.196.39
	I0421 19:07:44.822158    5552 certs.go:194] generating shared ca certs ...
	I0421 19:07:44.822158    5552 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:07:44.822742    5552 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 19:07:44.823056    5552 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 19:07:44.823568    5552 certs.go:256] generating profile certs ...
	I0421 19:07:44.823649    5552 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key
	I0421 19:07:44.824229    5552 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4
	I0421 19:07:44.824395    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.203.42 172.27.196.39 172.27.207.254]
	I0421 19:07:44.941624    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4 ...
	I0421 19:07:44.942650    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4: {Name:mkdf65cadb4d3eb2882aecf91b5b8bc56bf5ae8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:07:44.943986    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4 ...
	I0421 19:07:44.943986    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4: {Name:mk34d9f61d951b75fdc47c93983e3d4605d204e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:07:44.945206    5552 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.fae335c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt
	I0421 19:07:44.958259    5552 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.fae335c4 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key
	I0421 19:07:44.959574    5552 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key
	I0421 19:07:44.959574    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:07:44.960322    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:07:44.960488    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:07:44.961063    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:07:44.961063    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:07:44.962262    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 19:07:44.962610    5552 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 19:07:44.962610    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 19:07:44.963146    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 19:07:44.963440    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 19:07:44.963440    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 19:07:44.964257    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 19:07:44.964792    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:44.965158    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 19:07:44.965445    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 19:07:44.965445    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:07:47.162508    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:47.163166    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:47.163363    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:07:49.801714    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:07:49.801714    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:49.803052    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:07:49.906294    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 19:07:49.917885    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 19:07:49.957383    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 19:07:49.965269    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 19:07:50.001820    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 19:07:50.010913    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 19:07:50.056768    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 19:07:50.066792    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 19:07:50.102645    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 19:07:50.111428    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 19:07:50.150080    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 19:07:50.158182    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0421 19:07:50.183985    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:07:50.243374    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:07:50.300927    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:07:50.354607    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:07:50.408821    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0421 19:07:50.460181    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:07:50.513660    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:07:50.565851    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 19:07:50.615176    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:07:50.665709    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 19:07:50.718432    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 19:07:50.786129    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 19:07:50.828055    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 19:07:50.865135    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 19:07:50.902570    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 19:07:50.936462    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 19:07:50.974347    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0421 19:07:51.010821    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 19:07:51.060232    5552 ssh_runner.go:195] Run: openssl version
	I0421 19:07:51.082771    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 19:07:51.115933    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 19:07:51.127269    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 19:07:51.141186    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 19:07:51.165620    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:07:51.199580    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:07:51.234434    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:51.242500    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:51.257489    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:07:51.282579    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:07:51.319272    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 19:07:51.355181    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 19:07:51.362677    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 19:07:51.376296    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 19:07:51.402030    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 19:07:51.438595    5552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:07:51.445422    5552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:07:51.445724    5552 kubeadm.go:928] updating node {m02 172.27.196.39 8443 v1.30.0 docker true true} ...
	I0421 19:07:51.445898    5552 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-736000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.196.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:07:51.445997    5552 kube-vip.go:111] generating kube-vip config ...
	I0421 19:07:51.459436    5552 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 19:07:51.485844    5552 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 19:07:51.485844    5552 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
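The manifest above is the kube-vip static pod written to /etc/kubernetes/manifests/kube-vip.yaml on the new control-plane node; the values that matter are the HA virtual IP (172.27.207.254), the API server port (8443), and the leader-election/load-balancing toggles. A hypothetical way to template those values into a much-shortened manifest of this shape (not minikube's kube-vip.go) is:

// kubevip_template_sketch.go -- hypothetical templating of the VIP and port.
package main

import (
	"os"
	"text/template"
)

const skeleton = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - {name: address, value: "{{ .VIP }}"}
    - {name: port, value: "{{ .Port }}"}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "true"}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(skeleton))
	// Values come straight from the log: the cluster VIP and API server port.
	_ = t.Execute(os.Stdout, map[string]string{"VIP": "172.27.207.254", "Port": "8443"})
}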
	I0421 19:07:51.499800    5552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:07:51.517753    5552 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 19:07:51.530708    5552 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 19:07:51.556041    5552 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0421 19:07:51.556041    5552 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0421 19:07:51.556041    5552 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
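Each download.go line above pairs a binary URL with a checksum=file: reference to the published .sha256 next to it, so the cached kubelet/kubectl/kubeadm are verified before being copied into /var/lib/minikube/binaries. A generic sketch of that download-and-verify pattern, assuming the .sha256 file carries the hex digest in its first field (this is not minikube's downloader, and running it really fetches tens of megabytes from dl.k8s.io):

// checksum_download_sketch.go -- generic "download and verify" pattern.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// downloadVerified fetches url, then url+".sha256", and compares digests.
func downloadVerified(url string) ([]byte, error) {
	body, err := fetch(url)
	if err != nil {
		return nil, err
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(string(sumFile))
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file for %s", url)
	}
	digest := sha256.Sum256(body)
	if got := hex.EncodeToString(digest[:]); got != fields[0] {
		return nil, fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, fields[0])
	}
	return body, nil
}

func main() {
	bin, err := downloadVerified("https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm")
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	fmt.Printf("verified kubeadm: %d bytes\n", len(bin))
}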
	I0421 19:07:52.645563    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:07:52.658139    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:07:52.666388    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 19:07:52.666388    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 19:07:53.905135    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:07:53.918503    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:07:53.931495    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 19:07:53.931495    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 19:07:55.911839    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:07:55.956747    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:07:55.970368    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:07:55.977699    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 19:07:55.977699    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0421 19:07:56.548846    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 19:07:56.569746    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 19:07:56.605266    5552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:07:56.643646    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 19:07:56.693226    5552 ssh_runner.go:195] Run: grep 172.27.207.254	control-plane.minikube.internal$ /etc/hosts
	I0421 19:07:56.699513    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:07:56.738870    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:07:56.969031    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:07:57.001379    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:07:57.002238    5552 start.go:316] joinCluster: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:def
ault APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:07:57.002238    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 19:07:57.002238    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:07:59.191739    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:07:59.191790    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:07:59.191878    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:08:01.798434    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:08:01.798527    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:08:01.799273    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:08:02.026636    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0243628s)
	I0421 19:08:02.026783    5552 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:08:02.026783    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7v4r3f.vutef5no8emo2dip --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m02 --control-plane --apiserver-advertise-address=172.27.196.39 --apiserver-bind-port=8443"
	I0421 19:08:48.045692    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7v4r3f.vutef5no8emo2dip --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m02 --control-plane --apiserver-advertise-address=172.27.196.39 --apiserver-bind-port=8443": (46.0185824s)
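The 46-second step above is the actual control-plane join: kubeadm join against control-plane.minikube.internal:8443 with a fresh bootstrap token, the CA cert hash for discovery, the cri-dockerd socket, and the m02 advertise address. How such a command line is assembled from its parts can be sketched like this; the token and hash are placeholders here, the real values appear in the log.

// join_cmd_sketch.go -- assembling the logged kubeadm join command from parts.
package main

import (
	"fmt"
	"strings"
)

type joinParams struct {
	Endpoint      string // e.g. control-plane.minikube.internal:8443
	Token         string
	CACertHash    string // sha256:<hex>
	NodeName      string
	AdvertiseAddr string
	CRISocket     string
	ControlPlane  bool
}

func (p joinParams) command() string {
	args := []string{
		"kubeadm", "join", p.Endpoint,
		"--token", p.Token,
		"--discovery-token-ca-cert-hash", p.CACertHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", p.CRISocket,
		"--node-name=" + p.NodeName,
	}
	if p.ControlPlane {
		args = append(args,
			"--control-plane",
			"--apiserver-advertise-address="+p.AdvertiseAddr,
			"--apiserver-bind-port=8443")
	}
	return strings.Join(args, " ")
}

func main() {
	p := joinParams{
		Endpoint:      "control-plane.minikube.internal:8443",
		Token:         "<token>",       // placeholder; real token is in the log
		CACertHash:    "sha256:<hash>", // placeholder
		NodeName:      "ha-736000-m02",
		AdvertiseAddr: "172.27.196.39",
		CRISocket:     "unix:///var/run/cri-dockerd.sock",
		ControlPlane:  true,
	}
	fmt.Println(p.command())
}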
	I0421 19:08:48.045692    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 19:08:48.978772    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-736000-m02 minikube.k8s.io/updated_at=2024_04_21T19_08_48_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-736000 minikube.k8s.io/primary=false
	I0421 19:08:49.166196    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-736000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 19:08:49.366213    5552 start.go:318] duration metric: took 52.3636038s to joinCluster
	I0421 19:08:49.366213    5552 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:08:49.368800    5552 out.go:177] * Verifying Kubernetes components...
	I0421 19:08:49.367155    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:08:49.385799    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:08:49.789078    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:08:49.827233    5552 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:08:49.827679    5552 kapi.go:59] client config for ha-736000: &rest.Config{Host:"https://172.27.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 19:08:49.827679    5552 kubeadm.go:477] Overriding stale ClientConfig host https://172.27.207.254:8443 with https://172.27.203.42:8443
	I0421 19:08:49.829140    5552 node_ready.go:35] waiting up to 6m0s for node "ha-736000-m02" to be "Ready" ...
	I0421 19:08:49.829229    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:49.829229    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:49.829229    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:49.829229    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:50.031957    5552 round_trippers.go:574] Response Status: 200 OK in 202 milliseconds
	I0421 19:08:50.342143    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:50.342143    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:50.342346    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:50.342346    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:50.349814    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:08:50.831519    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:50.831519    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:50.831519    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:50.831519    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:50.846138    5552 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0421 19:08:51.339997    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:51.340068    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:51.340068    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:51.340191    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:51.352481    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:08:51.835457    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:51.835482    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:51.835482    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:51.835547    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:51.919707    5552 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0421 19:08:51.920478    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:52.329986    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:52.329986    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:52.329986    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:52.329986    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:52.339183    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:08:52.835745    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:52.835937    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:52.835937    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:52.835937    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:52.842273    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:53.341644    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:53.341644    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:53.341644    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:53.341644    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:53.348272    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:53.830823    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:53.830823    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:53.830927    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:53.830927    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:53.836159    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:08:54.337773    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:54.337773    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:54.337844    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:54.337844    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:54.348038    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:08:54.348038    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:54.831725    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:54.831789    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:54.831789    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:54.831789    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:54.838348    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:55.339083    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:55.339152    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:55.339152    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:55.339152    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:55.346007    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:55.843105    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:55.843189    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:55.843189    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:55.843189    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:55.847772    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:08:56.336772    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:56.336772    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:56.336772    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:56.336772    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:56.367727    5552 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0421 19:08:56.368591    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:56.830154    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:56.830154    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:56.830154    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:56.830154    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:56.836127    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:08:57.334411    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:57.334411    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:57.334411    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:57.334411    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:57.340100    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:08:57.836385    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:57.836446    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:57.836446    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:57.836446    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:57.840051    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:08:58.339679    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:58.339679    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:58.339679    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:58.339679    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:58.345693    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:58.841742    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:58.841742    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:58.841742    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:58.841742    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:58.846369    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:08:58.847923    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:08:59.329920    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:59.329920    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:59.329920    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:59.329920    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:59.336052    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:08:59.832521    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:08:59.832521    5552 round_trippers.go:469] Request Headers:
	I0421 19:08:59.832521    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:08:59.832521    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:08:59.842320    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:00.340634    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:00.340634    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:00.340634    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:00.340634    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:00.350330    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:00.844105    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:00.844105    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:00.844193    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:00.844193    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:00.850035    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:00.851128    5552 node_ready.go:53] node "ha-736000-m02" has status "Ready":"False"
	I0421 19:09:01.342610    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:01.342610    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:01.342610    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:01.342610    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:01.350347    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:01.845385    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:01.845385    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:01.845385    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:01.845385    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:01.850976    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:02.344082    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:02.344351    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.344351    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.344351    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.350625    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:02.833408    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:02.833408    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.833408    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.833408    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.839273    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:02.840475    5552 node_ready.go:49] node "ha-736000-m02" has status "Ready":"True"
	I0421 19:09:02.840475    5552 node_ready.go:38] duration metric: took 13.0112424s for node "ha-736000-m02" to be "Ready" ...
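The repeated GET /api/v1/nodes/ha-736000-m02 requests above form a simple poll loop: fetch the node object, check its Ready condition, and retry on a short interval until it reports True (about 13s here) or the 6-minute budget expires. A stripped-down version of that loop over plain HTTPS is sketched below; the insecure TLS config and hard-coded endpoint stand in for the client-cert setup shown in the kapi.go line and are assumptions, not the harness's client-go code.

// node_ready_poll_sketch.go -- bare-bones "wait for node Ready" loop.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady fetches the node object and reports whether its Ready condition is True.
func nodeReady(client *http.Client, apiServer, node string) (bool, error) {
	resp, err := client.Get(fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// InsecureSkipVerify and no client auth: placeholders for the profile's client certs.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(client, "https://172.27.203.42:8443", "ha-736000-m02"); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}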
	I0421 19:09:02.840475    5552 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:09:02.840475    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:02.840475    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.840475    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.840475    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.848775    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:09:02.859503    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.859503    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bp9zb
	I0421 19:09:02.859503    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.859503    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.859503    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.864623    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:02.865304    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:02.865304    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.865304    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.865304    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.870780    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:02.872197    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.872197    5552 pod_ready.go:81] duration metric: took 12.6937ms for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.872197    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.872197    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kv8pq
	I0421 19:09:02.872197    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.872197    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.872197    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.876797    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:02.877728    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:02.877728    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.877728    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.877728    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.881532    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.883322    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.883322    5552 pod_ready.go:81] duration metric: took 11.125ms for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.883322    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.883487    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000
	I0421 19:09:02.883525    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.883525    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.883525    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.887259    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.887693    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:02.887693    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.887693    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.887693    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.891290    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.892353    5552 pod_ready.go:92] pod "etcd-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.892353    5552 pod_ready.go:81] duration metric: took 9.0314ms for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.892353    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.892353    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m02
	I0421 19:09:02.892353    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.892353    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.892353    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.896947    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:02.897786    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:02.897840    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:02.897840    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:02.897840    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:02.901695    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:09:02.901695    5552 pod_ready.go:92] pod "etcd-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:02.902252    5552 pod_ready.go:81] duration metric: took 9.8989ms for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:02.902252    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:03.035609    5552 request.go:629] Waited for 133.1012ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000
	I0421 19:09:03.035730    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000
	I0421 19:09:03.035730    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.035730    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.035730    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.044977    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:03.238175    5552 request.go:629] Waited for 192.1819ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:03.238246    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:03.238363    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.238363    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.238363    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.249757    5552 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 19:09:03.252372    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:03.252450    5552 pod_ready.go:81] duration metric: took 350.1952ms for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:03.252450    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:03.439349    5552 request.go:629] Waited for 186.7242ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.439406    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.439406    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.439406    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.439406    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.445042    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:03.644939    5552 request.go:629] Waited for 198.5088ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:03.645070    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:03.645070    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.645144    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.645144    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.656131    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:09:03.834311    5552 request.go:629] Waited for 79.0088ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.834389    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:03.834517    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:03.834551    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:03.834551    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:03.843765    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:09:04.038887    5552 request.go:629] Waited for 194.0672ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.038887    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.039024    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.039024    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.039024    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.047797    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:09:04.260861    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:09:04.260861    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.260861    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.260861    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.268120    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:04.446480    5552 request.go:629] Waited for 177.5413ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.446985    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:04.447145    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.447145    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.447145    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.452758    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:04.454964    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:04.455055    5552 pod_ready.go:81] duration metric: took 1.2025962s for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:04.455055    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:04.634271    5552 request.go:629] Waited for 178.9704ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:09:04.634435    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:09:04.634435    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.634435    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.634435    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.640100    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:04.837422    5552 request.go:629] Waited for 195.1589ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:04.837517    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:04.837517    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:04.837517    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:04.837586    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:04.844244    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:04.844509    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:04.844509    5552 pod_ready.go:81] duration metric: took 389.4512ms for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:04.844509    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.040959    5552 request.go:629] Waited for 195.707ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:09:05.041046    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:09:05.041142    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.041142    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.041142    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.046912    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:05.243266    5552 request.go:629] Waited for 194.6967ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:05.243630    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:05.243693    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.243693    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.243785    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.251178    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:05.251688    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:05.251688    5552 pod_ready.go:81] duration metric: took 407.1763ms for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.251688    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.445374    5552 request.go:629] Waited for 193.4376ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:09:05.445550    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:09:05.445550    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.445550    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.445550    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.451123    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:05.634732    5552 request.go:629] Waited for 181.4822ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:05.634976    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:05.634976    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.634976    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.634976    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.642786    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:09:05.643544    5552 pod_ready.go:92] pod "kube-proxy-pqs5h" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:05.643544    5552 pod_ready.go:81] duration metric: took 391.8532ms for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.643544    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:05.840725    5552 request.go:629] Waited for 196.5598ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:09:05.840725    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:09:05.840725    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:05.840725    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:05.840725    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:05.847322    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:06.046061    5552 request.go:629] Waited for 196.7965ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.046218    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.046218    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.046218    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.046218    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.059009    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:09:06.060105    5552 pod_ready.go:92] pod "kube-proxy-tj6tp" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:06.060105    5552 pod_ready.go:81] duration metric: took 416.558ms for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.060105    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.248192    5552 request.go:629] Waited for 187.6363ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:09:06.248379    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:09:06.248379    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.248379    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.248379    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.253924    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:06.437246    5552 request.go:629] Waited for 182.1142ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:06.437488    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:09:06.437593    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.437593    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.437593    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.444408    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:09:06.445160    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:06.445160    5552 pod_ready.go:81] duration metric: took 385.0526ms for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.445160    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.640457    5552 request.go:629] Waited for 195.1153ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:09:06.640570    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:09:06.640694    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.640694    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.640694    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.645059    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:06.845581    5552 request.go:629] Waited for 199.3304ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.846013    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:09:06.846013    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.846013    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.846013    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.851818    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:06.853287    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:09:06.853481    5552 pod_ready.go:81] duration metric: took 408.3183ms for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:09:06.853481    5552 pod_ready.go:38] duration metric: took 4.0129783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:09:06.853560    5552 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:09:06.867247    5552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:09:06.900431    5552 api_server.go:72] duration metric: took 17.5340933s to wait for apiserver process to appear ...
	I0421 19:09:06.900431    5552 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:09:06.900431    5552 api_server.go:253] Checking apiserver healthz at https://172.27.203.42:8443/healthz ...
	I0421 19:09:06.914814    5552 api_server.go:279] https://172.27.203.42:8443/healthz returned 200:
	ok
	I0421 19:09:06.914942    5552 round_trippers.go:463] GET https://172.27.203.42:8443/version
	I0421 19:09:06.915055    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:06.915055    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:06.915055    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:06.916125    5552 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 19:09:06.916775    5552 api_server.go:141] control plane version: v1.30.0
	I0421 19:09:06.916775    5552 api_server.go:131] duration metric: took 16.3439ms to wait for apiserver health ...
	I0421 19:09:06.916775    5552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:09:07.034319    5552 request.go:629] Waited for 116.6747ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.034523    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.034523    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.034523    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.034523    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.045464    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:09:07.052980    5552 system_pods.go:59] 17 kube-system pods found
	I0421 19:09:07.052980    5552 system_pods.go:61] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:09:07.052980    5552 system_pods.go:61] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:09:07.052980    5552 system_pods.go:74] duration metric: took 135.4772ms to wait for pod list to return data ...
	I0421 19:09:07.052980    5552 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:09:07.235401    5552 request.go:629] Waited for 181.5265ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:09:07.235401    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:09:07.235401    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.235401    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.235401    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.240026    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:09:07.241670    5552 default_sa.go:45] found service account: "default"
	I0421 19:09:07.241778    5552 default_sa.go:55] duration metric: took 188.6887ms for default service account to be created ...
	I0421 19:09:07.241778    5552 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:09:07.437016    5552 request.go:629] Waited for 194.9712ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.437154    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:09:07.437154    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.437154    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.437154    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.445479    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:09:07.457581    5552 system_pods.go:86] 17 kube-system pods found
	I0421 19:09:07.457581    5552 system_pods.go:89] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:09:07.457581    5552 system_pods.go:89] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:09:07.457581    5552 system_pods.go:89] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:09:07.457581    5552 system_pods.go:89] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:09:07.458147    5552 system_pods.go:89] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:09:07.458191    5552 system_pods.go:89] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:09:07.458191    5552 system_pods.go:126] duration metric: took 216.4118ms to wait for k8s-apps to be running ...
	I0421 19:09:07.458191    5552 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:09:07.469788    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:09:07.499346    5552 system_svc.go:56] duration metric: took 41.1551ms WaitForService to wait for kubelet
	I0421 19:09:07.499456    5552 kubeadm.go:576] duration metric: took 18.1331143s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:09:07.499456    5552 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:09:07.640605    5552 request.go:629] Waited for 140.798ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes
	I0421 19:09:07.640837    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes
	I0421 19:09:07.640978    5552 round_trippers.go:469] Request Headers:
	I0421 19:09:07.640978    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:09:07.640978    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:09:07.646683    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:09:07.648783    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:09:07.648843    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:09:07.648900    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:09:07.648900    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:09:07.648951    5552 node_conditions.go:105] duration metric: took 149.3519ms to run NodePressure ...
	I0421 19:09:07.648967    5552 start.go:240] waiting for startup goroutines ...
	I0421 19:09:07.649022    5552 start.go:254] writing updated cluster config ...
	I0421 19:09:07.654683    5552 out.go:177] 
	I0421 19:09:07.663488    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:09:07.663488    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:09:07.670471    5552 out.go:177] * Starting "ha-736000-m03" control-plane node in "ha-736000" cluster
	I0421 19:09:07.675220    5552 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 19:09:07.675789    5552 cache.go:56] Caching tarball of preloaded images
	I0421 19:09:07.676487    5552 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 19:09:07.676612    5552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 19:09:07.676924    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:09:07.682467    5552 start.go:360] acquireMachinesLock for ha-736000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 19:09:07.682467    5552 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-736000-m03"
	I0421 19:09:07.682467    5552 start.go:93] Provisioning new machine with config: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName
:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:09:07.682467    5552 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0421 19:09:07.689258    5552 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 19:09:07.689258    5552 start.go:159] libmachine.API.Create for "ha-736000" (driver="hyperv")
	I0421 19:09:07.690055    5552 client.go:168] LocalClient.Create starting
	I0421 19:09:07.690288    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 19:09:07.690710    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:09:07.690710    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:09:07.690865    5552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 19:09:07.691046    5552 main.go:141] libmachine: Decoding PEM data...
	I0421 19:09:07.691046    5552 main.go:141] libmachine: Parsing certificate...
	I0421 19:09:07.691257    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 19:09:09.715867    5552 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 19:09:09.715867    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:09.715867    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 19:09:11.537427    5552 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 19:09:11.538202    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:11.538202    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:09:13.145780    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:09:13.145931    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:13.146025    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:09:17.015319    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:09:17.015319    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:17.017916    5552 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 19:09:17.520622    5552 main.go:141] libmachine: Creating SSH key...
	I0421 19:09:17.857683    5552 main.go:141] libmachine: Creating VM...
	I0421 19:09:17.857683    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 19:09:20.902904    5552 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 19:09:20.902904    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:20.903000    5552 main.go:141] libmachine: Using switch "Default Switch"
	I0421 19:09:20.903130    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 19:09:22.772522    5552 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 19:09:22.772940    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:22.772940    5552 main.go:141] libmachine: Creating VHD
	I0421 19:09:22.773052    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 19:09:26.603968    5552 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 719F816D-DDCC-4E80-AF20-44DA9C0C1AFB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 19:09:26.604246    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:26.604246    5552 main.go:141] libmachine: Writing magic tar header
	I0421 19:09:26.604246    5552 main.go:141] libmachine: Writing SSH key tar header
	I0421 19:09:26.614990    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 19:09:29.878652    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:29.878652    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:29.878985    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\disk.vhd' -SizeBytes 20000MB
	I0421 19:09:32.490078    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:32.490078    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:32.490206    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-736000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 19:09:36.338740    5552 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-736000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 19:09:36.339198    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:36.339258    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-736000-m03 -DynamicMemoryEnabled $false
	I0421 19:09:38.653966    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:38.654735    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:38.654818    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-736000-m03 -Count 2
	I0421 19:09:40.919100    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:40.919100    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:40.919299    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-736000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\boot2docker.iso'
	I0421 19:09:43.596704    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:43.597311    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:43.597311    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-736000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\disk.vhd'
	I0421 19:09:46.374581    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:46.374581    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:46.374581    5552 main.go:141] libmachine: Starting VM...
	I0421 19:09:46.375685    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-736000-m03
	I0421 19:09:49.554310    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:49.554310    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:49.554310    5552 main.go:141] libmachine: Waiting for host to start...
	I0421 19:09:49.554310    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:09:51.854256    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:09:51.854256    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:51.854610    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:09:54.456906    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:09:54.456906    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:55.459580    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:09:57.712745    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:09:57.712745    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:09:57.712908    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:00.340712    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:10:00.340788    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:01.347944    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:03.589647    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:03.589647    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:03.589647    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:06.203081    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:10:06.203081    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:07.215456    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:09.479605    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:09.479869    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:09.479869    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:12.094230    5552 main.go:141] libmachine: [stdout =====>] : 
	I0421 19:10:12.094230    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:13.095579    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:15.376631    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:15.376631    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:15.377548    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:18.058043    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:18.058043    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:18.058515    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:20.255704    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:20.256398    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:20.256398    5552 machine.go:94] provisionDockerMachine start ...
	I0421 19:10:20.256508    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:22.475061    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:22.475061    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:22.475211    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:25.193979    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:25.193979    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:25.201194    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:25.213916    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:25.215063    5552 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 19:10:25.350574    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 19:10:25.350665    5552 buildroot.go:166] provisioning hostname "ha-736000-m03"
	I0421 19:10:25.350665    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:27.553312    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:27.553312    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:27.553312    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:30.226083    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:30.226083    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:30.232709    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:30.233525    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:30.233525    5552 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-736000-m03 && echo "ha-736000-m03" | sudo tee /etc/hostname
	I0421 19:10:30.411718    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-736000-m03
	
	I0421 19:10:30.411718    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:32.597092    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:32.597092    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:32.597092    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:35.269924    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:35.269985    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:35.275801    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:35.276885    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:35.276885    5552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-736000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-736000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-736000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 19:10:35.443614    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
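
Each "About to run SSH command" entry above is one command executed on the node with the machine's id_rsa key. A rough sketch of such a runner using golang.org/x/crypto/ssh (runOverSSH is a hypothetical helper, not minikube's ssh_runner; host-key verification is skipped for brevity):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH runs a single command on the node, roughly what the logged
    // hostname/hosts provisioning steps amount to.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: do not verify the host key
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("172.27.195.51:22", "docker",
            `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa`,
            `sudo hostname ha-736000-m03 && echo "ha-736000-m03" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }
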
	I0421 19:10:35.443682    5552 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 19:10:35.443745    5552 buildroot.go:174] setting up certificates
	I0421 19:10:35.443745    5552 provision.go:84] configureAuth start
	I0421 19:10:35.443869    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:37.618718    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:37.618718    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:37.618718    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:40.296423    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:40.296423    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:40.297281    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:42.457901    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:42.458272    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:42.458321    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:45.140113    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:45.140396    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:45.140423    5552 provision.go:143] copyHostCerts
	I0421 19:10:45.140795    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 19:10:45.141129    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 19:10:45.141129    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 19:10:45.141555    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 19:10:45.142855    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 19:10:45.142922    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 19:10:45.142922    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 19:10:45.143449    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 19:10:45.144189    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 19:10:45.144786    5552 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 19:10:45.144853    5552 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 19:10:45.145232    5552 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 19:10:45.146156    5552 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-736000-m03 san=[127.0.0.1 172.27.195.51 ha-736000-m03 localhost minikube]
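
The server.pem generated here is an ordinary CA-signed TLS server certificate whose SANs are the node addresses and names listed in the line above. A self-contained crypto/x509 sketch of that shape; it creates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube loads its existing CA key pair instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs logged for ha-736000-m03.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-736000-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-736000-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.195.51")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
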
	I0421 19:10:45.512049    5552 provision.go:177] copyRemoteCerts
	I0421 19:10:45.528591    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 19:10:45.528591    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:47.755151    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:47.755151    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:47.755796    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:50.447906    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:50.448664    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:50.448664    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:10:50.566846    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0380979s)
	I0421 19:10:50.566846    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 19:10:50.567323    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 19:10:50.623794    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 19:10:50.624360    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0421 19:10:50.678245    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 19:10:50.679252    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 19:10:50.731910    5552 provision.go:87] duration metric: took 15.2879941s to configureAuth
	I0421 19:10:50.732075    5552 buildroot.go:189] setting minikube options for container-runtime
	I0421 19:10:50.733051    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:10:50.733165    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:52.942844    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:52.942844    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:52.943730    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:10:55.648643    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:10:55.649658    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:55.656699    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:10:55.657252    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:10:55.657252    5552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 19:10:55.803533    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 19:10:55.803637    5552 buildroot.go:70] root file system type: tmpfs
	I0421 19:10:55.803878    5552 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 19:10:55.803908    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:10:57.973601    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:10:57.974138    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:10:57.974269    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:00.742768    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:00.742768    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:00.750624    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:00.751180    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:00.751476    5552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.203.42"
	Environment="NO_PROXY=172.27.203.42,172.27.196.39"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 19:11:00.936519    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.203.42
	Environment=NO_PROXY=172.27.203.42,172.27.196.39
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 19:11:00.936615    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:03.148859    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:03.148859    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:03.148963    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:05.765285    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:05.766016    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:05.772474    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:05.773143    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:05.773143    5552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 19:11:08.031174    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 19:11:08.031174    5552 machine.go:97] duration metric: took 47.7744363s to provisionDockerMachine
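
Installing docker.service above boils down to "replace the unit only if it changed, then daemon-reload, enable, restart", which the `diff || { mv; systemctl ...; }` one-liner implements. The same idea as a Go sketch to be run as root on the guest (installIfChanged is a hypothetical helper, not minikube's provisioner):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes the unit file only when its content differs,
    // then reloads systemd and enables/restarts the service.
    func installIfChanged(path string, newContent []byte, service string) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return nil // unit already up to date
        }
        if err := os.WriteFile(path, newContent, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", service}, {"restart", service},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // abbreviated example content
        fmt.Println(installIfChanged("/lib/systemd/system/docker.service", unit, "docker"))
    }
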
	I0421 19:11:08.031174    5552 client.go:171] duration metric: took 2m0.3402639s to LocalClient.Create
	I0421 19:11:08.031174    5552 start.go:167] duration metric: took 2m0.3410612s to libmachine.API.Create "ha-736000"
	I0421 19:11:08.031174    5552 start.go:293] postStartSetup for "ha-736000-m03" (driver="hyperv")
	I0421 19:11:08.031174    5552 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 19:11:08.044153    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 19:11:08.044153    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:10.241010    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:10.241010    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:10.241283    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:12.875614    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:12.875614    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:12.876058    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:11:12.984483    5552 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9402947s)
	I0421 19:11:12.997723    5552 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 19:11:13.005159    5552 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 19:11:13.005242    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 19:11:13.005738    5552 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 19:11:13.006589    5552 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 19:11:13.006705    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 19:11:13.021178    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 19:11:13.051289    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 19:11:13.107445    5552 start.go:296] duration metric: took 5.0762352s for postStartSetup
	I0421 19:11:13.110523    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:15.311678    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:15.311806    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:15.311915    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:17.964950    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:17.964950    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:17.965454    5552 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\config.json ...
	I0421 19:11:17.968973    5552 start.go:128] duration metric: took 2m10.285581s to createHost
	I0421 19:11:17.969054    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:20.174976    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:20.175409    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:20.175566    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:22.811708    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:22.811708    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:22.818750    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:22.819282    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:22.819401    5552 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 19:11:22.953325    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713726682.960438090
	
	I0421 19:11:22.953325    5552 fix.go:216] guest clock: 1713726682.960438090
	I0421 19:11:22.953325    5552 fix.go:229] Guest: 2024-04-21 19:11:22.96043809 +0000 UTC Remote: 2024-04-21 19:11:17.9690544 +0000 UTC m=+587.387973001 (delta=4.99138369s)
	I0421 19:11:22.953325    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:25.175577    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:25.175894    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:25.175894    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:27.902734    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:27.903335    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:27.909663    5552 main.go:141] libmachine: Using SSH client type: native
	I0421 19:11:27.910362    5552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.51 22 <nil> <nil>}
	I0421 19:11:27.910541    5552 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713726682
	I0421 19:11:28.059587    5552 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 19:11:22 UTC 2024
	
	I0421 19:11:28.059587    5552 fix.go:236] clock set: Sun Apr 21 19:11:22 UTC 2024
	 (err=<nil>)
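
The clock check compares the guest's `date +%s.%N` output with the host clock and resets the guest when the drift is too large. Parsing the logged reading reproduces the 4.99138369s delta shown above (parseClock is a hypothetical helper, not fix.go's exact logic):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseClock turns the guest's `date +%s.%N` output into a time.Time.
    func parseClock(s string) time.Time {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec := int64(0)
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec).UTC()
    }

    func main() {
        guest := parseClock("1713726682.960438090")                     // guest reading from the log
        host := time.Date(2024, 4, 21, 19, 11, 17, 969054400, time.UTC) // "Remote" timestamp from the log
        fmt.Println(guest, guest.Sub(host))                             // delta ≈ 4.99138369s, as logged
    }
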
	I0421 19:11:28.059715    5552 start.go:83] releasing machines lock for "ha-736000-m03", held for 2m20.3762513s
	I0421 19:11:28.059944    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:30.242853    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:30.242853    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:30.242853    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:32.852757    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:32.853232    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:32.856146    5552 out.go:177] * Found network options:
	I0421 19:11:32.859005    5552 out.go:177]   - NO_PROXY=172.27.203.42,172.27.196.39
	W0421 19:11:32.862359    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.862500    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:11:32.864926    5552 out.go:177]   - NO_PROXY=172.27.203.42,172.27.196.39
	W0421 19:11:32.868634    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.868634    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.870335    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 19:11:32.870335    5552 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 19:11:32.873339    5552 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 19:11:32.873339    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:32.884816    5552 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 19:11:32.885675    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000-m03 ).state
	I0421 19:11:35.100582    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:35.100582    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:35.100699    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:35.101361    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:35.101438    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:35.101536    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000-m03 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:37.870410    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:37.870493    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:37.871028    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:11:37.902765    5552 main.go:141] libmachine: [stdout =====>] : 172.27.195.51
	
	I0421 19:11:37.903771    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:37.904279    5552 sshutil.go:53] new ssh client: &{IP:172.27.195.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000-m03\id_rsa Username:docker}
	I0421 19:11:38.105993    5552 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2202817s)
	W0421 19:11:38.106113    5552 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 19:11:38.106113    5552 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2327367s)
	I0421 19:11:38.119476    5552 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 19:11:38.154867    5552 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 19:11:38.154947    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:11:38.155218    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:11:38.211111    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 19:11:38.250532    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 19:11:38.273007    5552 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 19:11:38.287474    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 19:11:38.326937    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:11:38.367033    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 19:11:38.404653    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 19:11:38.441342    5552 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 19:11:38.477943    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 19:11:38.514150    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 19:11:38.551546    5552 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 19:11:38.587476    5552 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 19:11:38.624901    5552 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 19:11:38.661292    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:38.895286    5552 ssh_runner.go:195] Run: sudo systemctl restart containerd
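
The run of sed invocations above rewrites /etc/containerd/config.toml in place. The same edits can be expressed as regex substitutions; a sketch of two of them (pinning the sandbox image and forcing SystemdCgroup = false), applied to an in-memory sample rather than the real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    // patchContainerdConfig applies two of the logged sed edits:
    // pin the pause image and select the cgroupfs driver.
    func patchContainerdConfig(cfg string) string {
        sandbox := regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`)
        cfg = sandbox.ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
        cgroup := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        cfg = cgroup.ReplaceAllString(cfg, `${1}SystemdCgroup = false`)
        return cfg
    }

    func main() {
        in := "[plugins.\"io.containerd.grpc.v1.cri\"]\n" +
            "  sandbox_image = \"registry.k8s.io/pause:3.8\"\n" +
            "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
            "    SystemdCgroup = true\n"
        fmt.Print(patchContainerdConfig(in))
    }
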
	I0421 19:11:38.935311    5552 start.go:494] detecting cgroup driver to use...
	I0421 19:11:38.950317    5552 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 19:11:38.990529    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:11:39.035183    5552 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 19:11:39.081689    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 19:11:39.122799    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:11:39.168883    5552 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 19:11:39.245304    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 19:11:39.276709    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 19:11:39.339393    5552 ssh_runner.go:195] Run: which cri-dockerd
	I0421 19:11:39.364856    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 19:11:39.394447    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 19:11:39.457803    5552 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 19:11:39.689555    5552 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 19:11:39.922031    5552 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 19:11:39.922096    5552 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 19:11:39.977343    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:40.223560    5552 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 19:11:42.828318    5552 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6047395s)
	I0421 19:11:42.841570    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 19:11:42.884778    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:11:42.923470    5552 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 19:11:43.151931    5552 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 19:11:43.395400    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:43.628473    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 19:11:43.679911    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 19:11:43.721368    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:43.959012    5552 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 19:11:44.093650    5552 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 19:11:44.108208    5552 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 19:11:44.119740    5552 start.go:562] Will wait 60s for crictl version
	I0421 19:11:44.132875    5552 ssh_runner.go:195] Run: which crictl
	I0421 19:11:44.156967    5552 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 19:11:44.222612    5552 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 19:11:44.234163    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:11:44.282497    5552 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 19:11:44.324197    5552 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 19:11:44.327748    5552 out.go:177]   - env NO_PROXY=172.27.203.42
	I0421 19:11:44.330372    5552 out.go:177]   - env NO_PROXY=172.27.203.42,172.27.196.39
	I0421 19:11:44.334106    5552 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 19:11:44.339969    5552 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 19:11:44.343570    5552 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 19:11:44.343570    5552 ip.go:210] interface addr: 172.27.192.1/20
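
ip.go picks the host-side address by scanning network interfaces for the "vEthernet (Default Switch)" prefix and taking its first IPv4 address, which the next lines then write into /etc/hosts as host.minikube.internal. A sketch of that lookup with the standard net package (findInterfaceIPv4 is a hypothetical helper):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // findInterfaceIPv4 returns the first IPv4 address of the first interface
    // whose name starts with the given prefix.
    func findInterfaceIPv4(prefix string) (string, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return "", err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return "", err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP.String(), nil
                }
            }
        }
        return "", fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
    }

    func main() {
        ip, err := findInterfaceIPv4("vEthernet (Default Switch)")
        fmt.Println(ip, err) // 172.27.192.1 on the host in this log
    }
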
	I0421 19:11:44.359491    5552 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 19:11:44.366910    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:11:44.395045    5552 mustload.go:65] Loading cluster: ha-736000
	I0421 19:11:44.395815    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:11:44.396046    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:11:46.575258    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:46.575258    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:46.575258    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:11:46.576252    5552 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000 for IP: 172.27.195.51
	I0421 19:11:46.576252    5552 certs.go:194] generating shared ca certs ...
	I0421 19:11:46.576252    5552 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:11:46.577136    5552 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 19:11:46.577452    5552 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 19:11:46.577635    5552 certs.go:256] generating profile certs ...
	I0421 19:11:46.578307    5552 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\client.key
	I0421 19:11:46.578486    5552 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736
	I0421 19:11:46.578486    5552 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.203.42 172.27.196.39 172.27.195.51 172.27.207.254]
	I0421 19:11:47.001958    5552 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736 ...
	I0421 19:11:47.001958    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736: {Name:mka7cd24961d014aa09bdc5f5ea7b50c20452ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:11:47.002980    5552 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736 ...
	I0421 19:11:47.002980    5552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736: {Name:mk3ecac3bc96e5743192beddc441181563013b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 19:11:47.003644    5552 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt.d68ed736 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt
	I0421 19:11:47.015695    5552 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key.d68ed736 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key
	I0421 19:11:47.016729    5552 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key
	I0421 19:11:47.016729    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 19:11:47.017746    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 19:11:47.018186    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 19:11:47.018353    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 19:11:47.018389    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 19:11:47.018624    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 19:11:47.018825    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 19:11:47.018825    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 19:11:47.019926    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 19:11:47.020199    5552 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 19:11:47.020199    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 19:11:47.020631    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 19:11:47.020902    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 19:11:47.020902    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 19:11:47.021633    5552 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 19:11:47.021894    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 19:11:47.022085    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:47.022353    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 19:11:47.022607    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:11:49.241421    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:49.241421    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:49.242471    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:11:51.909383    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:11:51.909383    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:51.911090    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:11:52.020886    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0421 19:11:52.029003    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0421 19:11:52.069530    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0421 19:11:52.077201    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0421 19:11:52.115362    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0421 19:11:52.124317    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0421 19:11:52.162603    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0421 19:11:52.170344    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0421 19:11:52.207334    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0421 19:11:52.216824    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0421 19:11:52.254184    5552 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0421 19:11:52.263523    5552 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0421 19:11:52.288347    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 19:11:52.340680    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 19:11:52.395838    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 19:11:52.452113    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 19:11:52.507456    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0421 19:11:52.559742    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 19:11:52.609854    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 19:11:52.664891    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-736000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0421 19:11:52.718848    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 19:11:52.770633    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 19:11:52.823237    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 19:11:52.876196    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0421 19:11:52.914240    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0421 19:11:52.950347    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0421 19:11:52.988724    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0421 19:11:53.023476    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0421 19:11:53.060608    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0421 19:11:53.095210    5552 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0421 19:11:53.143909    5552 ssh_runner.go:195] Run: openssl version
	I0421 19:11:53.168848    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 19:11:53.209760    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 19:11:53.217324    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 19:11:53.231603    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 19:11:53.254133    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 19:11:53.294240    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 19:11:53.330979    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:53.339089    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:53.352501    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 19:11:53.378315    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 19:11:53.415652    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 19:11:53.450034    5552 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 19:11:53.459324    5552 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 19:11:53.472193    5552 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 19:11:53.497702    5552 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 19:11:53.536413    5552 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 19:11:53.544339    5552 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 19:11:53.544650    5552 kubeadm.go:928] updating node {m03 172.27.195.51 8443 v1.30.0 docker true true} ...
	I0421 19:11:53.544837    5552 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-736000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.195.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 19:11:53.545039    5552 kube-vip.go:111] generating kube-vip config ...
	I0421 19:11:53.559213    5552 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0421 19:11:53.589768    5552 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0421 19:11:53.589768    5552 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.27.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
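
The kube-vip static pod above is generated config; only the VIP address, the interface, and the image vary per cluster in this run. A trimmed-down text/template sketch that renders those three values (the manifest is abbreviated and hypothetical, not the full kube-vip.yaml written to /etc/kubernetes/manifests):

    package main

    import (
        "os"
        "text/template"
    )

    // Abbreviated kube-vip pod manifest; only the fields that vary here are templated.
    const kubeVIPTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - args: ["manager"]
        env:
        - {name: vip_interface, value: {{.Interface}}}
        - {name: address, value: "{{.Address}}"}
        - {name: port, value: "8443"}
        - {name: cp_enable, value: "true"}
        - {name: lb_enable, value: "true"}
        image: {{.Image}}
        name: kube-vip
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
        _ = t.Execute(os.Stdout, map[string]string{
            "Interface": "eth0",
            "Address":   "172.27.207.254",
            "Image":     "ghcr.io/kube-vip/kube-vip:v0.7.1",
        })
    }
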
	I0421 19:11:53.604365    5552 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 19:11:53.623473    5552 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 19:11:53.638073    5552 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 19:11:53.658698    5552 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0421 19:11:53.658698    5552 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0421 19:11:53.658698    5552 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0421 19:11:53.658698    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:11:53.658698    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:11:53.676822    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 19:11:53.678035    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 19:11:53.678035    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:11:53.686698    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 19:11:53.686698    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 19:11:53.686698    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 19:11:53.686698    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 19:11:53.755518    5552 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:11:53.770932    5552 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 19:11:53.924544    5552 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 19:11:53.924625    5552 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
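
	The step above skips minikube's local cache and pulls kubelet/kubectl/kubeadm straight from dl.k8s.io, pinning each download to the published .sha256 file (the checksum=file:... suffix in the binary.go lines). A minimal Go sketch of that download-and-verify pattern, assuming the standard dl.k8s.io layout; fetch and downloadVerified are illustrative helpers, not minikube code:

```go
// verify_download.go: illustrative sketch of fetching a Kubernetes binary and
// checking it against the published .sha256 file, as suggested by the
// "checksum=file:..." URLs in the log. Not minikube's actual implementation.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch returns the body of url, or an error for non-200 responses.
// (A real downloader would stream large binaries to disk instead.)
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// downloadVerified fetches binURL and confirms its SHA-256 matches the
// digest published at binURL+".sha256".
func downloadVerified(binURL string) ([]byte, error) {
	bin, err := fetch(binURL)
	if err != nil {
		return nil, err
	}
	sum, err := fetch(binURL + ".sha256")
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(string(sum)) // file may be "<hex>" or "<hex>  <name>"
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file for %s", binURL)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != fields[0] {
		return nil, fmt.Errorf("checksum mismatch for %s", binURL)
	}
	return bin, nil
}

func main() {
	const url = "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	if _, err := downloadVerified(url); err != nil {
		fmt.Println("verify failed:", err)
		return
	}
	fmt.Println("kubectl verified")
}
```
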
	I0421 19:11:55.095766    5552 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0421 19:11:55.124160    5552 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0421 19:11:55.158961    5552 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 19:11:55.199801    5552 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0421 19:11:55.254112    5552 ssh_runner.go:195] Run: grep 172.27.207.254	control-plane.minikube.internal$ /etc/hosts
	I0421 19:11:55.261824    5552 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 19:11:55.299305    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:11:55.537065    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
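
	Before the kubelet is started, the bash one-liner above pins control-plane.minikube.internal to the HA virtual IP by filtering any existing entry out of /etc/hosts and appending the new mapping. The same edit, sketched in Go for illustration (pinHost is not a minikube function):

```go
// pin_host.go: illustrative Go equivalent of the /etc/hosts one-liner in the
// log (drop old "control-plane.minikube.internal" lines, append the new IP).
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps host to ip.
func pinHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror the log's `grep -v $'\tcontrol-plane.minikube.internal$'`.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch copy for the demo; on the node the target would be /etc/hosts.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := pinHost("hosts.test", "172.27.207.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```
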
	I0421 19:11:55.572810    5552 host.go:66] Checking if "ha-736000" exists ...
	I0421 19:11:55.574017    5552 start.go:316] joinCluster: &{Name:ha-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-736000 Namespace:default APIServerHAVIP:172.27.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.203.42 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.196.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.27.195.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 19:11:55.574262    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 19:11:55.574262    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-736000 ).state
	I0421 19:11:57.760251    5552 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 19:11:57.760251    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:11:57.760979    5552 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-736000 ).networkadapters[0]).ipaddresses[0]
	I0421 19:12:00.427029    5552 main.go:141] libmachine: [stdout =====>] : 172.27.203.42
	
	I0421 19:12:00.427472    5552 main.go:141] libmachine: [stderr =====>] : 
	I0421 19:12:00.427972    5552 sshutil.go:53] new ssh client: &{IP:172.27.203.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-736000\id_rsa Username:docker}
	I0421 19:12:00.652693    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0783946s)
	I0421 19:12:00.652819    5552 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.27.195.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:12:00.652930    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 836nuw.84ejy2nbaoe6fjph --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m03 --control-plane --apiserver-advertise-address=172.27.195.51 --apiserver-bind-port=8443"
	I0421 19:12:48.252845    5552 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 836nuw.84ejy2nbaoe6fjph --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-736000-m03 --control-plane --apiserver-advertise-address=172.27.195.51 --apiserver-bind-port=8443": (47.5995821s)
	I0421 19:12:48.252845    5552 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 19:12:49.111006    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-736000-m03 minikube.k8s.io/updated_at=2024_04_21T19_12_49_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=ha-736000 minikube.k8s.io/primary=false
	I0421 19:12:49.307192    5552 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-736000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0421 19:12:49.471180    5552 start.go:318] duration metric: took 53.8967854s to joinCluster
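
	The join itself is stock kubeadm driven over SSH: a long-lived token is minted on the first control plane (`kubeadm token create --print-join-command --ttl=0`), and the printed command is then run on m03 with the extra control-plane flags shown above. A stripped-down sketch of that second step; joinControlPlane is an illustrative wrapper (the real run goes through ssh_runner with PATH pointed at /var/lib/minikube/binaries/v1.30.0):

```go
// join_node.go: illustrative wrapper around the kubeadm join invocation shown
// in the log; in the real flow this executes on the new node over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func joinControlPlane(endpoint, token, caHash, nodeName, advertiseIP string) error {
	cmd := exec.Command("sudo", "kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name", nodeName,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", "8443")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubeadm join failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Token and CA hash come from `kubeadm token create --print-join-command --ttl=0`
	// on the primary; values below are the ones printed in this run's log.
	err := joinControlPlane("control-plane.minikube.internal:8443",
		"836nuw.84ejy2nbaoe6fjph",
		"sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0",
		"ha-736000-m03", "172.27.195.51")
	if err != nil {
		fmt.Println(err)
	}
}
```
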
	I0421 19:12:49.471536    5552 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.27.195.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 19:12:49.474053    5552 out.go:177] * Verifying Kubernetes components...
	I0421 19:12:49.472358    5552 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 19:12:49.490050    5552 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 19:12:49.922744    5552 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 19:12:49.971201    5552 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 19:12:49.972657    5552 kapi.go:59] client config for ha-736000: &rest.Config{Host:"https://172.27.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-736000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0421 19:12:49.972817    5552 kubeadm.go:477] Overriding stale ClientConfig host https://172.27.207.254:8443 with https://172.27.203.42:8443
	I0421 19:12:49.973841    5552 node_ready.go:35] waiting up to 6m0s for node "ha-736000-m03" to be "Ready" ...
	I0421 19:12:49.973841    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:49.973841    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:49.973841    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:49.973841    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:49.990836    5552 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0421 19:12:50.489315    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:50.489372    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:50.489372    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:50.489372    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:50.494222    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:50.976768    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:50.976829    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:50.976829    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:50.976829    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:50.989134    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:51.486240    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:51.486240    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:51.486240    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:51.486240    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:51.491617    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:51.977680    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:51.977680    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:51.977680    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:51.977680    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:51.982923    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:51.983451    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:52.485213    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:52.485213    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:52.485213    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:52.485213    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:52.498727    5552 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 19:12:52.988785    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:52.988785    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:52.988785    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:52.988785    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:52.993792    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:53.478712    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:53.478778    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:53.478778    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:53.478847    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:53.484323    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:53.983803    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:53.983869    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:53.983869    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:53.983869    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:53.989486    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:53.990334    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:54.474842    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:54.474965    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:54.474965    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:54.474965    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:54.481555    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:12:54.981375    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:54.981484    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:54.981553    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:54.981553    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:54.991205    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:12:55.483348    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:55.483348    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:55.483417    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:55.483417    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:55.487857    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:55.984741    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:55.984741    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:55.984741    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:55.984741    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:55.994447    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:12:55.994623    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:56.478132    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:56.478241    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:56.478241    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:56.478342    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:56.495069    5552 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0421 19:12:56.980617    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:56.980617    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:56.980617    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:56.980617    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:56.987054    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:12:57.481427    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:57.481427    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:57.481427    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:57.481427    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:57.489933    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:12:57.985396    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:57.985396    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:57.985396    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:57.985396    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:57.989923    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:58.488507    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:58.488507    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:58.488507    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:58.488507    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:58.501162    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:58.502807    5552 node_ready.go:53] node "ha-736000-m03" has status "Ready":"False"
	I0421 19:12:58.977764    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:58.977852    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:58.977852    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:58.977852    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:58.983757    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.478805    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:59.478805    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.478805    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.478805    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.491472    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:59.492248    5552 node_ready.go:49] node "ha-736000-m03" has status "Ready":"True"
	I0421 19:12:59.492326    5552 node_ready.go:38] duration metric: took 9.5184179s for node "ha-736000-m03" to be "Ready" ...
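
	The run of round_trippers GETs above is a readiness poll: fetch the node roughly every 500 ms until its NodeReady condition reports True, bounded by the 6m0s budget. With client-go the loop looks roughly like this sketch (not minikube's node_ready.go; it assumes a kubeconfig at the default location):

```go
// node_ready.go: sketch of polling a node's Ready condition with client-go,
// mirroring the GET-every-500ms loop in the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its NodeReady condition is True or the
// timeout expires.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-736000-m03", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
		return
	}
	fmt.Println("node is Ready")
}
```
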
	I0421 19:12:59.492326    5552 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:12:59.492442    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:12:59.492442    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.492442    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.492528    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.507377    5552 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0421 19:12:59.519864    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.519864    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bp9zb
	I0421 19:12:59.519864    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.519864    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.519864    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.525411    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.526602    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:12:59.526602    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.526602    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.526602    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.531509    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:12:59.532525    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.532568    5552 pod_ready.go:81] duration metric: took 12.7037ms for pod "coredns-7db6d8ff4d-bp9zb" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.532568    5552 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.532631    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kv8pq
	I0421 19:12:59.532761    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.532761    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.532761    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.541119    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:12:59.541119    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:12:59.541119    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.541119    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.541119    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.547083    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.548211    5552 pod_ready.go:92] pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.548273    5552 pod_ready.go:81] duration metric: took 15.7053ms for pod "coredns-7db6d8ff4d-kv8pq" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.548273    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.548453    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000
	I0421 19:12:59.548453    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.548453    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.548453    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.551066    5552 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 19:12:59.552038    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:12:59.552038    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.552038    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.552038    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.557583    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.558178    5552 pod_ready.go:92] pod "etcd-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.558178    5552 pod_ready.go:81] duration metric: took 9.9053ms for pod "etcd-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.558178    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.558178    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m02
	I0421 19:12:59.558178    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.558178    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.558178    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.563831    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:12:59.564707    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:12:59.564797    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.564797    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.564797    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.577576    5552 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 19:12:59.578555    5552 pod_ready.go:92] pod "etcd-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:12:59.578555    5552 pod_ready.go:81] duration metric: took 20.3762ms for pod "etcd-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.578555    5552 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:12:59.681260    5552 request.go:629] Waited for 102.5901ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:12:59.681594    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:12:59.681594    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.681594    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.681594    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.688848    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:12:59.887618    5552 request.go:629] Waited for 196.7779ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:59.887735    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:12:59.887735    5552 round_trippers.go:469] Request Headers:
	I0421 19:12:59.887735    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:12:59.887735    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:12:59.894363    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:00.090292    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:00.090292    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.090543    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.090543    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.095604    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:00.292791    5552 request.go:629] Waited for 194.7248ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.292877    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.292877    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.292877    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.292877    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.297232    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:00.590265    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:00.590479    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.590479    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.590479    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.594781    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:00.684871    5552 request.go:629] Waited for 88.5672ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.684871    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:00.684871    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:00.685114    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:00.685114    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:00.694992    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:13:01.091329    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:01.091550    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.091550    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.091550    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.096226    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:01.098312    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:01.098428    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.098428    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.098428    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.108435    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:01.591021    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:01.591021    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.591021    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.591021    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.595968    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:01.597007    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:01.597007    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:01.597007    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:01.597007    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:01.601270    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:01.602595    5552 pod_ready.go:102] pod "etcd-ha-736000-m03" in "kube-system" namespace has status "Ready":"False"
	I0421 19:13:02.092054    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/etcd-ha-736000-m03
	I0421 19:13:02.092109    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.092109    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.092109    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.098066    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:02.099269    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:02.099348    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.099348    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.099348    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.103372    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:13:02.103372    5552 pod_ready.go:92] pod "etcd-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:02.103946    5552 pod_ready.go:81] duration metric: took 2.5247994s for pod "etcd-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.103997    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.104110    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000
	I0421 19:13:02.104110    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.104167    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.104167    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.107532    5552 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 19:13:02.280133    5552 request.go:629] Waited for 170.4314ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:02.280133    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:02.280133    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.280133    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.280133    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.284853    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:02.286537    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:02.286624    5552 pod_ready.go:81] duration metric: took 182.626ms for pod "kube-apiserver-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.286624    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.482588    5552 request.go:629] Waited for 195.7346ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:13:02.482760    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m02
	I0421 19:13:02.482814    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.482814    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.482843    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.487127    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:02.685101    5552 request.go:629] Waited for 196.0591ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:02.685339    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:02.685339    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.685339    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.685339    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.689393    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:02.691472    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:02.691531    5552 pod_ready.go:81] duration metric: took 404.9042ms for pod "kube-apiserver-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.691531    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:02.891246    5552 request.go:629] Waited for 199.4436ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m03
	I0421 19:13:02.891349    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-736000-m03
	I0421 19:13:02.891349    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:02.891349    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:02.891349    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:02.899722    5552 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 19:13:03.079508    5552 request.go:629] Waited for 178.9394ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:03.079508    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:03.079791    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.079791    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.079791    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.084133    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:03.084133    5552 pod_ready.go:92] pod "kube-apiserver-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:03.084133    5552 pod_ready.go:81] duration metric: took 392.5985ms for pod "kube-apiserver-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.084133    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.281183    5552 request.go:629] Waited for 196.7875ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:13:03.281580    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000
	I0421 19:13:03.281626    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.281688    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.281688    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.287629    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:03.486814    5552 request.go:629] Waited for 197.3532ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:03.487058    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:03.487103    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.487103    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.487103    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.492259    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:03.493654    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:03.493654    5552 pod_ready.go:81] duration metric: took 409.5181ms for pod "kube-controller-manager-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.493654    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.689687    5552 request.go:629] Waited for 194.9778ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:13:03.689861    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m02
	I0421 19:13:03.689861    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.689914    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.689914    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.694919    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:03.891348    5552 request.go:629] Waited for 194.3314ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:03.891576    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:03.891576    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:03.891691    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:03.891691    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:03.898622    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:03.899515    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:03.899515    5552 pod_ready.go:81] duration metric: took 405.8587ms for pod "kube-controller-manager-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:03.899515    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.080715    5552 request.go:629] Waited for 181.0088ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m03
	I0421 19:13:04.080715    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-736000-m03
	I0421 19:13:04.080715    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.080715    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.080715    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.086682    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:04.285954    5552 request.go:629] Waited for 198.318ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.286252    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.286379    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.286379    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.286379    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.293773    5552 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 19:13:04.294646    5552 pod_ready.go:92] pod "kube-controller-manager-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:04.294646    5552 pod_ready.go:81] duration metric: took 395.1285ms for pod "kube-controller-manager-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.294646    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blktz" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.487477    5552 request.go:629] Waited for 192.0745ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-blktz
	I0421 19:13:04.487640    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-blktz
	I0421 19:13:04.487640    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.487640    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.487640    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.492946    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:04.691108    5552 request.go:629] Waited for 196.2906ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.691349    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:04.691349    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.691349    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.691527    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.697080    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:04.697943    5552 pod_ready.go:92] pod "kube-proxy-blktz" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:04.697943    5552 pod_ready.go:81] duration metric: took 403.2938ms for pod "kube-proxy-blktz" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.697943    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:04.879325    5552 request.go:629] Waited for 181.2322ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:13:04.879706    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pqs5h
	I0421 19:13:04.879706    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:04.879706    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:04.879706    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:04.885603    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:05.083361    5552 request.go:629] Waited for 196.4332ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.083480    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.083536    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.083575    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.083575    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.088972    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:05.090315    5552 pod_ready.go:92] pod "kube-proxy-pqs5h" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:05.090315    5552 pod_ready.go:81] duration metric: took 392.3693ms for pod "kube-proxy-pqs5h" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.090904    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.286010    5552 request.go:629] Waited for 194.9558ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:13:05.286288    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tj6tp
	I0421 19:13:05.286414    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.286414    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.286414    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.291166    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:05.489639    5552 request.go:629] Waited for 197.2532ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:05.490038    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:05.490038    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.490104    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.490134    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.494644    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:05.496101    5552 pod_ready.go:92] pod "kube-proxy-tj6tp" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:05.496201    5552 pod_ready.go:81] duration metric: took 405.2948ms for pod "kube-proxy-tj6tp" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.496201    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.694346    5552 request.go:629] Waited for 198.0483ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:13:05.694658    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000
	I0421 19:13:05.694658    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.694658    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.694658    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.704810    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:05.883545    5552 request.go:629] Waited for 177.5476ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.883545    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000
	I0421 19:13:05.883860    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:05.883860    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:05.883860    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:05.893193    5552 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 19:13:05.894494    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:05.894610    5552 pod_ready.go:81] duration metric: took 398.406ms for pod "kube-scheduler-ha-736000" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:05.894610    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.088172    5552 request.go:629] Waited for 193.0961ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:13:06.088353    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m02
	I0421 19:13:06.088353    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.088418    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.088630    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.093788    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:06.289812    5552 request.go:629] Waited for 194.2555ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:06.289812    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m02
	I0421 19:13:06.289812    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.289812    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.289812    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.294394    5552 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 19:13:06.295766    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:06.295766    5552 pod_ready.go:81] duration metric: took 401.0413ms for pod "kube-scheduler-ha-736000-m02" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.295766    5552 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.493994    5552 request.go:629] Waited for 198.2264ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m03
	I0421 19:13:06.494563    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-736000-m03
	I0421 19:13:06.494563    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.494563    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.494563    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.500123    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:06.679504    5552 request.go:629] Waited for 178.3582ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:06.679823    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes/ha-736000-m03
	I0421 19:13:06.679823    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.679957    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.679957    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.686603    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:06.687802    5552 pod_ready.go:92] pod "kube-scheduler-ha-736000-m03" in "kube-system" namespace has status "Ready":"True"
	I0421 19:13:06.687802    5552 pod_ready.go:81] duration metric: took 392.0332ms for pod "kube-scheduler-ha-736000-m03" in "kube-system" namespace to be "Ready" ...
	I0421 19:13:06.687802    5552 pod_ready.go:38] duration metric: took 7.1954258s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 19:13:06.687802    5552 api_server.go:52] waiting for apiserver process to appear ...
	I0421 19:13:06.702651    5552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 19:13:06.735599    5552 api_server.go:72] duration metric: took 17.2639418s to wait for apiserver process to appear ...
	I0421 19:13:06.735706    5552 api_server.go:88] waiting for apiserver healthz status ...
	I0421 19:13:06.735706    5552 api_server.go:253] Checking apiserver healthz at https://172.27.203.42:8443/healthz ...
	I0421 19:13:06.745366    5552 api_server.go:279] https://172.27.203.42:8443/healthz returned 200:
	ok
	I0421 19:13:06.745366    5552 round_trippers.go:463] GET https://172.27.203.42:8443/version
	I0421 19:13:06.745366    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.745366    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.745366    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.747464    5552 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 19:13:06.748125    5552 api_server.go:141] control plane version: v1.30.0
	I0421 19:13:06.748190    5552 api_server.go:131] duration metric: took 12.4843ms to wait for apiserver health ...
	I0421 19:13:06.748244    5552 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 19:13:06.882701    5552 request.go:629] Waited for 134.1595ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:06.882822    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:06.882822    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:06.882822    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:06.883018    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:06.893728    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:06.905783    5552 system_pods.go:59] 24 kube-system pods found
	I0421 19:13:06.905783    5552 system_pods.go:61] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "etcd-ha-736000-m03" [4b774b33-bf9e-450a-8b4a-0b6146e19ce9] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kindnet-hcfln" [56443347-dfaf-443f-9014-e19cb654b235] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-apiserver-ha-736000-m03" [06d38aa2-774f-4276-915a-2b28029132e2] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-controller-manager-ha-736000-m03" [ca1a34ce-37d8-4066-b411-6ada78b6741d] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-proxy-blktz" [bbad68d6-1ee4-4c58-8cdc-aa091eec6a90] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-scheduler-ha-736000-m03" [57c9bb2f-dbf6-489e-a2ad-686b5cdbb090] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "kube-vip-ha-736000-m03" [59d91112-5b6a-486a-bc8f-f3613243482d] Running
	I0421 19:13:06.905783    5552 system_pods.go:61] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:13:06.905783    5552 system_pods.go:74] duration metric: took 157.5377ms to wait for pod list to return data ...
	I0421 19:13:06.905783    5552 default_sa.go:34] waiting for default service account to be created ...
	I0421 19:13:07.086904    5552 request.go:629] Waited for 181.119ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:13:07.086904    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/default/serviceaccounts
	I0421 19:13:07.086904    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:07.086904    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:07.086904    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:07.093514    5552 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 19:13:07.093514    5552 default_sa.go:45] found service account: "default"
	I0421 19:13:07.093514    5552 default_sa.go:55] duration metric: took 187.7289ms for default service account to be created ...
	I0421 19:13:07.093514    5552 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 19:13:07.291722    5552 request.go:629] Waited for 198.0046ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:07.291839    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/namespaces/kube-system/pods
	I0421 19:13:07.291839    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:07.291839    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:07.291839    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:07.302350    5552 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 19:13:07.313365    5552 system_pods.go:86] 24 kube-system pods found
	I0421 19:13:07.313365    5552 system_pods.go:89] "coredns-7db6d8ff4d-bp9zb" [7da3275b-72ab-4de1-b829-77f0cd23dc23] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "coredns-7db6d8ff4d-kv8pq" [d4be5bff-5bb6-4ddf-8eff-ccc817c9ad36] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "etcd-ha-736000" [06a6dbb9-b777-4098-aa92-6df6049b0503] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "etcd-ha-736000-m02" [6c50f3fb-a82f-43ff-80f7-43d070388780] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "etcd-ha-736000-m03" [4b774b33-bf9e-450a-8b4a-0b6146e19ce9] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kindnet-7j6mw" [830e1e63-992c-4e19-9aff-4d0f3be10d13] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kindnet-hcfln" [56443347-dfaf-443f-9014-e19cb654b235] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kindnet-wwkr9" [29feeeec-236b-4fe2-bd95-2df787f4851a] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-apiserver-ha-736000" [6355d12a-d611-44f5-9bfe-725d2ee23d0d] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-apiserver-ha-736000-m02" [4531a2aa-6aff-4d31-a1b4-55837f998ba0] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-apiserver-ha-736000-m03" [06d38aa2-774f-4276-915a-2b28029132e2] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-controller-manager-ha-736000" [622072f3-a063-476a-a7b2-45f44ebb0d06] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-controller-manager-ha-736000-m02" [24d7539e-0271-4f8a-8fb6-d8a4e19eacb1] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-controller-manager-ha-736000-m03" [ca1a34ce-37d8-4066-b411-6ada78b6741d] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-proxy-blktz" [bbad68d6-1ee4-4c58-8cdc-aa091eec6a90] Running
	I0421 19:13:07.313365    5552 system_pods.go:89] "kube-proxy-pqs5h" [45dbccc2-56ec-4ef9-9daf-1b47ebf39c42] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-proxy-tj6tp" [47ae652e-af60-4a64-a218-3d2fadfe7b0f] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-scheduler-ha-736000" [3980d4e4-3d76-4b51-9da9-9d7b270b6f26] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-scheduler-ha-736000-m02" [4b8301ab-82ac-4660-9f54-41c8ccd7149f] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-scheduler-ha-736000-m03" [57c9bb2f-dbf6-489e-a2ad-686b5cdbb090] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-vip-ha-736000" [10075b63-2468-4380-9528-ad61710a05ec] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-vip-ha-736000-m02" [0577cb35-8275-4c27-a8eb-8176c4e198bb] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "kube-vip-ha-736000-m03" [59d91112-5b6a-486a-bc8f-f3613243482d] Running
	I0421 19:13:07.314302    5552 system_pods.go:89] "storage-provisioner" [f51763ff-64ac-4583-86bf-32ec3dfcb438] Running
	I0421 19:13:07.314302    5552 system_pods.go:126] duration metric: took 220.7873ms to wait for k8s-apps to be running ...
	I0421 19:13:07.314302    5552 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 19:13:07.329090    5552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 19:13:07.357772    5552 system_svc.go:56] duration metric: took 43.4688ms WaitForService to wait for kubelet
	I0421 19:13:07.357772    5552 kubeadm.go:576] duration metric: took 17.8861097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 19:13:07.357772    5552 node_conditions.go:102] verifying NodePressure condition ...
	I0421 19:13:07.492688    5552 request.go:629] Waited for 134.7695ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.203.42:8443/api/v1/nodes
	I0421 19:13:07.492688    5552 round_trippers.go:463] GET https://172.27.203.42:8443/api/v1/nodes
	I0421 19:13:07.492688    5552 round_trippers.go:469] Request Headers:
	I0421 19:13:07.492688    5552 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 19:13:07.492892    5552 round_trippers.go:473]     Accept: application/json, */*
	I0421 19:13:07.498219    5552 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 19:13:07.499208    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:13:07.499208    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:13:07.499208    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:13:07.499208    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:13:07.499208    5552 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 19:13:07.499208    5552 node_conditions.go:123] node cpu capacity is 2
	I0421 19:13:07.499208    5552 node_conditions.go:105] duration metric: took 141.4353ms to run NodePressure ...
	I0421 19:13:07.499208    5552 start.go:240] waiting for startup goroutines ...
	I0421 19:13:07.499208    5552 start.go:254] writing updated cluster config ...
	I0421 19:13:07.515631    5552 ssh_runner.go:195] Run: rm -f paused
	I0421 19:13:07.678623    5552 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 19:13:07.682726    5552 out.go:177] * Done! kubectl is now configured to use "ha-736000" cluster and "default" namespace by default
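For reference, the readiness sequence recorded above (api_server.go / round_trippers) boils down to two anonymous HTTPS GETs against the control-plane endpoint: /healthz (which answered "ok") and /version (which reported v1.30.0). The Go sketch below reproduces that probe; it assumes the 172.27.203.42:8443 address from this run is reachable and that anonymous access to those paths is allowed (the Kubernetes default), skips TLS verification for brevity, and is illustrative only rather than minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Control-plane endpoint taken from the log above; adjust for your own cluster.
	base := "https://172.27.203.42:8443"

	// TLS verification is skipped only because this is a throwaway probe of a
	// disposable test cluster; a real client should trust the cluster CA instead.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// /healthz should answer "ok" and /version should return the build info.
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get(base + path)
		if err != nil {
			fmt.Printf("GET %s failed: %v\n", path, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %d: %s\n", path, resp.StatusCode, body)
	}
}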
	
	
	==> Docker <==
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.427443016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.427485017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.427645918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.473721147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.474085049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.474365550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:05:12 ha-736000 dockerd[1331]: time="2024-04-21T19:05:12.474693152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661229653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661462254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661504754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:46 ha-736000 dockerd[1331]: time="2024-04-21T19:13:46.661832954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:46 ha-736000 cri-dockerd[1228]: time="2024-04-21T19:13:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/acdf86c89c3e8c324af41a4f457b43e522eda33e2414ccc223e67a72e3a12553/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 21 19:13:48 ha-736000 cri-dockerd[1228]: time="2024-04-21T19:13:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506673734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506767035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506783235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:13:48 ha-736000 dockerd[1331]: time="2024-04-21T19:13:48.506913736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 19:14:53 ha-736000 dockerd[1325]: 2024/04/21 19:14:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 21 19:14:53 ha-736000 dockerd[1325]: 2024/04/21 19:14:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 21 19:14:53 ha-736000 dockerd[1325]: 2024/04/21 19:14:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 21 19:14:53 ha-736000 dockerd[1325]: 2024/04/21 19:14:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 21 19:14:54 ha-736000 dockerd[1325]: 2024/04/21 19:14:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 21 19:14:54 ha-736000 dockerd[1325]: 2024/04/21 19:14:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 21 19:14:54 ha-736000 dockerd[1325]: 2024/04/21 19:14:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 21 19:14:54 ha-736000 dockerd[1325]: 2024/04/21 19:14:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c8dc2e2ae84d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   acdf86c89c3e8       busybox-fc5497c4f-pnbbn
	6c62393114dc7       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   141288f6eefae       coredns-7db6d8ff4d-kv8pq
	638e6b90760c8       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   bc63af3f3c46b       coredns-7db6d8ff4d-bp9zb
	8fc14347dc613       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   46ca7ef6a5269       storage-provisioner
	67806b4246ae6       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   7c65bef05022a       kindnet-wwkr9
	a9cc5bf6a42d5       a0bf559e280cf                                                                                         26 minutes ago      Running             kube-proxy                0                   6f60e71384698       kube-proxy-pqs5h
	c922d4fe4beb4       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     27 minutes ago      Running             kube-vip                  0                   12f9d02462845       kube-vip-ha-736000
	256d65336b19e       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   9b7895f7d7345       etcd-ha-736000
	2b4f4a1077366       c42f13656d0b2                                                                                         27 minutes ago      Running             kube-apiserver            0                   e7717a3630e7c       kube-apiserver-ha-736000
	ee3dd828038f3       c7aad43836fa5                                                                                         27 minutes ago      Running             kube-controller-manager   0                   0c7f2f1bde060       kube-controller-manager-ha-736000
	c4e32eeddc5d0       259c8277fcbbc                                                                                         27 minutes ago      Running             kube-scheduler            0                   6821588bdfb91       kube-scheduler-ha-736000
	
	
	==> coredns [638e6b90760c] <==
	[INFO] 10.244.1.2:46317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.074879373s
	[INFO] 10.244.1.2:44497 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033531012s
	[INFO] 10.244.2.2:37103 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133301s
	[INFO] 10.244.2.2:59848 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000203602s
	[INFO] 10.244.0.4:56770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116801s
	[INFO] 10.244.0.4:59242 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.106295392s
	[INFO] 10.244.1.2:49714 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000262602s
	[INFO] 10.244.1.2:37201 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105s
	[INFO] 10.244.2.2:35465 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078601s
	[INFO] 10.244.2.2:48750 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000676s
	[INFO] 10.244.0.4:47753 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.005962638s
	[INFO] 10.244.0.4:38588 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161301s
	[INFO] 10.244.0.4:55794 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000623s
	[INFO] 10.244.0.4:55062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000404003s
	[INFO] 10.244.0.4:35274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106201s
	[INFO] 10.244.0.4:33671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000320102s
	[INFO] 10.244.1.2:54675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001183s
	[INFO] 10.244.1.2:57457 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115101s
	[INFO] 10.244.1.2:59030 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171001s
	[INFO] 10.244.2.2:51204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142501s
	[INFO] 10.244.0.4:53285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000460703s
	[INFO] 10.244.0.4:59478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059601s
	[INFO] 10.244.0.4:60738 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000693s
	[INFO] 10.244.1.2:57081 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000292302s
	[INFO] 10.244.2.2:56624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147701s
	
	
	==> coredns [6c62393114dc] <==
	[INFO] 10.244.1.2:55146 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108s
	[INFO] 10.244.1.2:58020 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001094s
	[INFO] 10.244.2.2:49508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110701s
	[INFO] 10.244.2.2:49267 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000088401s
	[INFO] 10.244.2.2:50616 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000689s
	[INFO] 10.244.2.2:55615 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126901s
	[INFO] 10.244.2.2:50917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000216702s
	[INFO] 10.244.2.2:59737 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000724s
	[INFO] 10.244.0.4:33352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101901s
	[INFO] 10.244.0.4:40067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000076801s
	[INFO] 10.244.1.2:44122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072201s
	[INFO] 10.244.2.2:42201 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208401s
	[INFO] 10.244.2.2:39977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194501s
	[INFO] 10.244.2.2:47817 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167201s
	[INFO] 10.244.0.4:39376 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175601s
	[INFO] 10.244.1.2:58828 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184001s
	[INFO] 10.244.1.2:45992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184502s
	[INFO] 10.244.1.2:56858 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000192802s
	[INFO] 10.244.2.2:35837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000404202s
	[INFO] 10.244.2.2:57867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129301s
	[INFO] 10.244.2.2:33588 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000341902s
	[INFO] 10.244.0.4:56879 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196601s
	[INFO] 10.244.0.4:57921 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131301s
	[INFO] 10.244.0.4:44088 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008s
	[INFO] 10.244.0.4:37195 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137201s
	
	
	==> describe nodes <==
	Name:               ha-736000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-736000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-736000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T19_04_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-736000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:31:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:29:41 +0000   Sun, 21 Apr 2024 19:04:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:29:41 +0000   Sun, 21 Apr 2024 19:04:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:29:41 +0000   Sun, 21 Apr 2024 19:04:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:29:41 +0000   Sun, 21 Apr 2024 19:05:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.203.42
	  Hostname:    ha-736000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6d9266bf460429381eee461582868fb
	  System UUID:                386751a7-3515-fc4b-adde-e0bf63ba6158
	  Boot ID:                    073f8dcd-ea4d-4254-b5e7-41fa38183661
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pnbbn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-bp9zb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-kv8pq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-736000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-wwkr9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-736000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-736000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-pqs5h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-736000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-736000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-736000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-736000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-736000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-736000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-736000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-736000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-736000 event: Registered Node ha-736000 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-736000 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-736000 event: Registered Node ha-736000 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-736000 event: Registered Node ha-736000 in Controller
	
	
	Name:               ha-736000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-736000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-736000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_08_48_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:08:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-736000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:30:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Apr 2024 19:29:39 +0000   Sun, 21 Apr 2024 19:31:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Apr 2024 19:29:39 +0000   Sun, 21 Apr 2024 19:31:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Apr 2024 19:29:39 +0000   Sun, 21 Apr 2024 19:31:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Apr 2024 19:29:39 +0000   Sun, 21 Apr 2024 19:31:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.27.196.39
	  Hostname:    ha-736000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 de63462d1a074ebbba129500a0137334
	  System UUID:                192f459d-8063-de45-aa5e-eef009d1631a
	  Boot ID:                    6fea686d-c65d-4b3a-a988-3a4ad32f1726
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cmvt9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-736000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-7j6mw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-736000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-736000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-tj6tp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-736000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-736000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-736000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-736000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-736000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-736000-m02 event: Registered Node ha-736000-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-736000-m02 event: Registered Node ha-736000-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-736000-m02 event: Registered Node ha-736000-m02 in Controller
	  Normal  NodeNotReady             19s                node-controller  Node ha-736000-m02 status is now: NodeNotReady
	
	
	Name:               ha-736000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-736000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-736000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_12_49_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-736000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:31:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:29:31 +0000   Sun, 21 Apr 2024 19:12:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:29:31 +0000   Sun, 21 Apr 2024 19:12:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:29:31 +0000   Sun, 21 Apr 2024 19:12:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:29:31 +0000   Sun, 21 Apr 2024 19:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.195.51
	  Hostname:    ha-736000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e439a3f6e9b141fe9f08cc149b329157
	  System UUID:                000c990c-4060-cf46-bc96-3f05b191c853
	  Boot ID:                    63eca9ab-7fbf-46f1-bc92-b4952a619d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nttt5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-736000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-hcfln                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-736000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-736000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-blktz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-736000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-736000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-736000-m03 event: Registered Node ha-736000-m03 in Controller
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-736000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-736000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-736000-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node ha-736000-m03 event: Registered Node ha-736000-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-736000-m03 event: Registered Node ha-736000-m03 in Controller
	
	
	Name:               ha-736000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-736000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=ha-736000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T19_18_09_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 19:18:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-736000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 19:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 19:28:52 +0000   Sun, 21 Apr 2024 19:18:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 19:28:52 +0000   Sun, 21 Apr 2024 19:18:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 19:28:52 +0000   Sun, 21 Apr 2024 19:18:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 19:28:52 +0000   Sun, 21 Apr 2024 19:18:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.207.237
	  Hostname:    ha-736000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c290db930d94781996ee77d8fa8e0ba
	  System UUID:                5cc683fa-6b27-f34e-a4fc-cf6150b63e3e
	  Boot ID:                    bc6ffec1-0dfd-4a7a-92f8-a667e03bbb7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8nkjq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-lmh69    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-736000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-736000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-736000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-736000-m04 event: Registered Node ha-736000-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-736000-m04 event: Registered Node ha-736000-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-736000-m04 event: Registered Node ha-736000-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-736000-m04 status is now: NodeReady
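The node summaries above are the core of this post-mortem: ha-736000-m02 carries node.kubernetes.io/unreachable taints and a NodeNotReady event, while the other three nodes report Ready. As a hedged aside, the Ready condition that distinguishes -m02 can be pulled programmatically with a few lines of client-go; the sketch below assumes a kubeconfig at the default location pointing at this cluster and is an illustration, not part of the test harness.

package main

import (
	"context"
	"flag"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes the usual ~/.kube/config; point this at the profile's kubeconfig as needed.
	kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print only the Ready condition for each node, mirroring the tables above.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%-20s Ready=%-8s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}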
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr21 19:03] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.188275] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Apr21 19:04] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.112990] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.609517] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.236156] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.281585] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.935737] systemd-fstab-generator[1181]: Ignoring "noauto" option for root device
	[  +0.226833] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.236102] systemd-fstab-generator[1205]: Ignoring "noauto" option for root device
	[  +0.345609] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.130515] kauditd_printk_skb: 191 callbacks suppressed
	[ +11.474579] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.126499] kauditd_printk_skb: 4 callbacks suppressed
	[  +3.920488] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +6.986844] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.111623] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.817968] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.641866] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[ +16.104312] kauditd_printk_skb: 17 callbacks suppressed
	[Apr21 19:05] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.059320] kauditd_printk_skb: 4 callbacks suppressed
	[Apr21 19:08] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [256d65336b19] <==
	{"level":"warn","ts":"2024-04-21T19:31:51.619756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.621266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.641074Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.652772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.667396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.67407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.679795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.689895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.693611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.696445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.700587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.706287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.729972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.735036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.740293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.757574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.792434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.796314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.821469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.833484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.84149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.856349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.870174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.883104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-21T19:31:51.89783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8ae9a3b9f37dd1a5","from":"8ae9a3b9f37dd1a5","remote-peer-id":"8fa58fb966d43e88","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:52 up 29 min,  0 users,  load average: 0.68, 0.51, 0.42
	Linux ha-736000 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [67806b4246ae] <==
	I0421 19:31:20.368299       1 main.go:250] Node ha-736000-m04 has CIDR [10.244.3.0/24] 
	I0421 19:31:30.376560       1 main.go:223] Handling node with IPs: map[172.27.203.42:{}]
	I0421 19:31:30.376681       1 main.go:227] handling current node
	I0421 19:31:30.376698       1 main.go:223] Handling node with IPs: map[172.27.196.39:{}]
	I0421 19:31:30.376707       1 main.go:250] Node ha-736000-m02 has CIDR [10.244.1.0/24] 
	I0421 19:31:30.377096       1 main.go:223] Handling node with IPs: map[172.27.195.51:{}]
	I0421 19:31:30.377398       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	I0421 19:31:30.377652       1 main.go:223] Handling node with IPs: map[172.27.207.237:{}]
	I0421 19:31:30.377739       1 main.go:250] Node ha-736000-m04 has CIDR [10.244.3.0/24] 
	I0421 19:31:40.393799       1 main.go:223] Handling node with IPs: map[172.27.203.42:{}]
	I0421 19:31:40.394065       1 main.go:227] handling current node
	I0421 19:31:40.394143       1 main.go:223] Handling node with IPs: map[172.27.196.39:{}]
	I0421 19:31:40.394218       1 main.go:250] Node ha-736000-m02 has CIDR [10.244.1.0/24] 
	I0421 19:31:40.394413       1 main.go:223] Handling node with IPs: map[172.27.195.51:{}]
	I0421 19:31:40.394515       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	I0421 19:31:40.394598       1 main.go:223] Handling node with IPs: map[172.27.207.237:{}]
	I0421 19:31:40.394638       1 main.go:250] Node ha-736000-m04 has CIDR [10.244.3.0/24] 
	I0421 19:31:50.411348       1 main.go:223] Handling node with IPs: map[172.27.203.42:{}]
	I0421 19:31:50.411395       1 main.go:227] handling current node
	I0421 19:31:50.411408       1 main.go:223] Handling node with IPs: map[172.27.196.39:{}]
	I0421 19:31:50.411416       1 main.go:250] Node ha-736000-m02 has CIDR [10.244.1.0/24] 
	I0421 19:31:50.411621       1 main.go:223] Handling node with IPs: map[172.27.195.51:{}]
	I0421 19:31:50.411880       1 main.go:250] Node ha-736000-m03 has CIDR [10.244.2.0/24] 
	I0421 19:31:50.412202       1 main.go:223] Handling node with IPs: map[172.27.207.237:{}]
	I0421 19:31:50.412288       1 main.go:250] Node ha-736000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2b4f4a107736] <==
	E0421 19:13:57.827408       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61211: use of closed network connection
	E0421 19:13:58.395677       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61213: use of closed network connection
	E0421 19:13:59.434013       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61216: use of closed network connection
	E0421 19:14:10.005641       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61218: use of closed network connection
	E0421 19:14:10.571205       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61222: use of closed network connection
	E0421 19:14:21.141729       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61224: use of closed network connection
	E0421 19:14:21.692696       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61227: use of closed network connection
	E0421 19:14:32.272398       1 conn.go:339] Error on socket receive: read tcp 172.27.207.254:8443->172.27.192.1:61229: use of closed network connection
	I0421 19:18:02.193306       1 trace.go:236] Trace[968662344]: "Update" accept:application/json, */*,audit-id:fc1da098-a3e7-4448-a05c-1a221425b91b,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (21-Apr-2024 19:18:01.647) (total time: 545ms):
	Trace[968662344]: ["GuaranteedUpdate etcd3" audit-id:fc1da098-a3e7-4448-a05c-1a221425b91b,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 544ms (19:18:01.648)
	Trace[968662344]:  ---"Txn call completed" 542ms (19:18:02.193)]
	Trace[968662344]: [545.699444ms] [545.699444ms] END
	I0421 19:31:13.087972       1 trace.go:236] Trace[636292530]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.203.42,type:*v1.Endpoints,resource:apiServerIPInfo (21-Apr-2024 19:31:12.486) (total time: 601ms):
	Trace[636292530]: ---"Transaction prepared" 252ms (19:31:12.745)
	Trace[636292530]: ---"Txn call completed" 342ms (19:31:13.087)
	Trace[636292530]: [601.273036ms] [601.273036ms] END
	W0421 19:31:13.284763       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.195.51 172.27.203.42]
	I0421 19:31:13.795344       1 trace.go:236] Trace[2079857059]: "Update" accept:application/json, */*,audit-id:c5aa7b98-273d-46a7-8135-589da7593087,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (21-Apr-2024 19:31:13.244) (total time: 550ms):
	Trace[2079857059]: ["GuaranteedUpdate etcd3" audit-id:c5aa7b98-273d-46a7-8135-589da7593087,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 550ms (19:31:13.245)
	Trace[2079857059]:  ---"Txn call completed" 549ms (19:31:13.795)]
	Trace[2079857059]: [550.291251ms] [550.291251ms] END
	I0421 19:31:13.847233       1 trace.go:236] Trace[1521713120]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:cb41c1ed-4e17-4f89-bef1-903befb3efce,client:127.0.0.1,api-group:,api-version:v1,name:kubernetes,subresource:,namespace:default,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (21-Apr-2024 19:31:13.286) (total time: 560ms):
	Trace[1521713120]: ["GuaranteedUpdate etcd3" audit-id:cb41c1ed-4e17-4f89-bef1-903befb3efce,key:/services/endpoints/default/kubernetes,type:*core.Endpoints,resource:endpoints 560ms (19:31:13.286)
	Trace[1521713120]:  ---"Txn call completed" 559ms (19:31:13.847)]
	Trace[1521713120]: [560.819231ms] [560.819231ms] END
	
	
	==> kube-controller-manager [ee3dd828038f] <==
	I0421 19:08:47.174389       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-736000-m02"
	I0421 19:12:40.983106       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-736000-m03\" does not exist"
	I0421 19:12:41.012827       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-736000-m03" podCIDRs=["10.244.2.0/24"]
	I0421 19:12:42.251624       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-736000-m03"
	I0421 19:13:45.864292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.396399ms"
	I0421 19:13:45.935909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.291494ms"
	I0421 19:13:45.936613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.2µs"
	I0421 19:13:45.951115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="296.8µs"
	I0421 19:13:46.182540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="229.620312ms"
	I0421 19:13:46.605251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="422.057876ms"
	I0421 19:13:46.720224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.904957ms"
	I0421 19:13:46.720329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.3µs"
	I0421 19:13:49.055916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.850652ms"
	I0421 19:13:49.057169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.701µs"
	I0421 19:13:49.271105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.126136ms"
	I0421 19:13:49.271220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.001µs"
	I0421 19:13:49.515726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.358441ms"
	I0421 19:13:49.516494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.3µs"
	I0421 19:18:08.937902       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-736000-m04\" does not exist"
	I0421 19:18:08.976603       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-736000-m04" podCIDRs=["10.244.3.0/24"]
	I0421 19:18:12.353679       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-736000-m04"
	I0421 19:18:32.018631       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-736000-m04"
	I0421 19:31:32.582798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-736000-m04"
	I0421 19:31:32.880549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.899443ms"
	I0421 19:31:32.881177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.001µs"
	
	
	==> kube-proxy [a9cc5bf6a42d] <==
	I0421 19:05:01.047163       1 server_linux.go:69] "Using iptables proxy"
	I0421 19:05:01.086071       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.203.42"]
	I0421 19:05:01.144752       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 19:05:01.145018       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 19:05:01.145065       1 server_linux.go:165] "Using iptables Proxier"
	I0421 19:05:01.160872       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 19:05:01.162754       1 server.go:872] "Version info" version="v1.30.0"
	I0421 19:05:01.162823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 19:05:01.175087       1 config.go:192] "Starting service config controller"
	I0421 19:05:01.175201       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 19:05:01.175229       1 config.go:101] "Starting endpoint slice config controller"
	I0421 19:05:01.175235       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 19:05:01.181076       1 config.go:319] "Starting node config controller"
	I0421 19:05:01.184185       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 19:05:01.184195       1 shared_informer.go:320] Caches are synced for node config
	I0421 19:05:01.276687       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 19:05:01.276699       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c4e32eeddc5d] <==
	W0421 19:04:40.749769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 19:04:40.749798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 19:04:40.754598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 19:04:40.754684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 19:04:40.823363       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0421 19:04:40.823531       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0421 19:04:40.974036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 19:04:40.974369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 19:04:41.028360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 19:04:41.028489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 19:04:41.137209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 19:04:41.137352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 19:04:41.153169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 19:04:41.153512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 19:04:41.166919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 19:04:41.167070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0421 19:04:43.837225       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0421 19:18:09.191885       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8xz8j\": pod kube-proxy-8xz8j is already assigned to node \"ha-736000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8xz8j" node="ha-736000-m04"
	E0421 19:18:09.199193       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a2ae3f31-cf9c-4066-ae20-267a15b7036e(kube-system/kube-proxy-8xz8j) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8xz8j"
	E0421 19:18:09.199298       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8xz8j\": pod kube-proxy-8xz8j is already assigned to node \"ha-736000-m04\"" pod="kube-system/kube-proxy-8xz8j"
	I0421 19:18:09.199387       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8xz8j" node="ha-736000-m04"
	E0421 19:18:09.196838       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-69qr5\": pod kindnet-69qr5 is already assigned to node \"ha-736000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-69qr5" node="ha-736000-m04"
	E0421 19:18:09.201054       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a82c0195-8aa7-4c37-8572-8c85042a7cf2(kube-system/kindnet-69qr5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-69qr5"
	E0421 19:18:09.201097       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-69qr5\": pod kindnet-69qr5 is already assigned to node \"ha-736000-m04\"" pod="kube-system/kindnet-69qr5"
	I0421 19:18:09.201486       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-69qr5" node="ha-736000-m04"
	
	
	==> kubelet <==
	Apr 21 19:27:44 ha-736000 kubelet[2215]: E0421 19:27:44.025478    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:27:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:27:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:27:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:27:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:28:44 ha-736000 kubelet[2215]: E0421 19:28:44.020898    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:28:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:28:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:28:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:28:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:29:44 ha-736000 kubelet[2215]: E0421 19:29:44.027292    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:29:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:29:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:29:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:29:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:30:44 ha-736000 kubelet[2215]: E0421 19:30:44.025521    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:30:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:30:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:30:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:30:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 19:31:44 ha-736000 kubelet[2215]: E0421 19:31:44.019832    2215 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 19:31:44 ha-736000 kubelet[2215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 19:31:44 ha-736000 kubelet[2215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 19:31:44 ha-736000 kubelet[2215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 19:31:44 ha-736000 kubelet[2215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 19:31:43.280542   13420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-736000 -n ha-736000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-736000 -n ha-736000: (12.6662339s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-736000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (86.23s)
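The stderr block above ends with the same `Unable to resolve the current Docker CLI context "default"` warning that recurs throughout this report; per the message itself, the Docker CLI config on this CI host references a context whose meta.json cannot be found, so every minikube invocation that touches the Docker CLI prints it. The repeated etcd "dropped internal Raft message ... remote-peer-active: false" entries earlier in the dump are consistent with the secondary control-plane member being stopped by this test rather than a separate fault. As a purely illustrative sketch (an assumed Go helper, not minikube's own test code), the snippet below shows one way a harness could strip such known-benign warning lines from captured stderr before asserting that stderr is empty; the function name and the matched substring are choices made for this example.

// Illustrative sketch only: drop the known-benign Docker CLI context warning
// from captured stderr before deciding whether stderr is "empty". The matched
// substring is the warning text visible in the stderr blocks of this report.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// stripKnownWarnings returns stderr with known-benign warning lines and blank
// lines removed.
func stripKnownWarnings(stderr string) string {
	var kept []string
	sc := bufio.NewScanner(strings.NewReader(stderr))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, `Unable to resolve the current Docker CLI context "default"`) {
			continue // emitted because the context's meta.json is missing on this host
		}
		if strings.TrimSpace(line) != "" {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n")
}

func main() {
	stderr := `W0421 19:31:43.280542   13420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found`
	if rest := stripKnownWarnings(stderr); rest != "" {
		fmt.Printf("unexpected stderr: %q\n", rest)
	} else {
		fmt.Println("stderr contains only known-benign warnings")
	}
}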

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (58s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-82tdr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-82tdr -- sh -c "ping -c 1 172.27.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-82tdr -- sh -c "ping -c 1 172.27.192.1": exit status 1 (10.5569805s)

                                                
                                                
-- stdout --
	PING 172.27.192.1 (172.27.192.1): 56 data bytes
	
	--- 172.27.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 20:10:06.415581    3624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.27.192.1) from pod (busybox-fc5497c4f-82tdr): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-l6544 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-l6544 -- sh -c "ping -c 1 172.27.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-l6544 -- sh -c "ping -c 1 172.27.192.1": exit status 1 (10.562926s)

                                                
                                                
-- stdout --
	PING 172.27.192.1 (172.27.192.1): 56 data bytes
	
	--- 172.27.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 20:10:17.515399    6572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.27.192.1) from pod (busybox-fc5497c4f-l6544): exit status 1
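In both attempts the preceding nslookup of host.minikube.internal completes, but a single ICMP echo from the busybox pod to the host-side address 172.27.192.1 gets no reply (100% packet loss, exit status 1). On a Hyper-V Default Switch setup this pattern often means ICMP is filtered between guest and Windows host (for example by the host firewall) rather than a pod-network fault, though the log above does not establish the cause. The Go sketch below replays the same probe, using plain kubectl with --context instead of the minikube kubectl wrapper the test drives; the pod name, context name, and IP are copied from the failing run, and the firewall reading is an assumption.

// Minimal sketch (not the test's own code): re-run the probe shown in the log
// above with plain kubectl. Context, pod name and host IP are copied from the
// failing run.
package main

import (
	"fmt"
	"os/exec"
)

// pingFromPod execs a single ICMP echo from the given pod toward hostIP and
// returns the combined output plus the command error (nil only if a reply
// came back, since busybox ping exits 1 when no packets are received).
func pingFromPod(kubeContext, pod, hostIP string) ([]byte, error) {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	return cmd.CombinedOutput()
}

func main() {
	out, err := pingFromPod("multinode-152500", "busybox-fc5497c4f-82tdr", "172.27.192.1")
	fmt.Printf("%s", out)
	if err != nil {
		// Matches the behaviour above: exit status 1 when the echo gets no reply.
		fmt.Println("ping failed:", err)
	}
}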
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-152500 -n multinode-152500
E0421 20:10:36.925146   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-152500 -n multinode-152500: (12.3695956s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 logs -n 25: (8.7151187s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-945100 ssh -- ls                    | mount-start-2-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:58 UTC | 21 Apr 24 19:58 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-945100                           | mount-start-1-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:58 UTC | 21 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-945100 ssh -- ls                    | mount-start-2-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:59 UTC | 21 Apr 24 19:59 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-945100                           | mount-start-2-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 19:59 UTC | 21 Apr 24 20:00 UTC |
	| start   | -p mount-start-2-945100                           | mount-start-2-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:00 UTC | 21 Apr 24 20:02 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:02 UTC |                     |
	|         | --profile mount-start-2-945100 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-945100 ssh -- ls                    | mount-start-2-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:02 UTC | 21 Apr 24 20:02 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-945100                           | mount-start-2-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:02 UTC | 21 Apr 24 20:02 UTC |
	| delete  | -p mount-start-1-945100                           | mount-start-1-945100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:02 UTC | 21 Apr 24 20:02 UTC |
	| start   | -p multinode-152500                               | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:02 UTC | 21 Apr 24 20:09 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- apply -f                   | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:09 UTC | 21 Apr 24 20:09 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- rollout                    | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:09 UTC | 21 Apr 24 20:09 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- get pods -o                | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:09 UTC | 21 Apr 24 20:09 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- get pods -o                | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-82tdr --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-l6544 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-82tdr --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-l6544 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-82tdr -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-l6544 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- get pods -o                | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-82tdr                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC |                     |
	|         | busybox-fc5497c4f-82tdr -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.192.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC | 21 Apr 24 20:10 UTC |
	|         | busybox-fc5497c4f-l6544                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-152500 -- exec                       | multinode-152500     | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:10 UTC |                     |
	|         | busybox-fc5497c4f-l6544 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.192.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 20:02:40
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 20:02:40.988235   12908 out.go:291] Setting OutFile to fd 1020 ...
	I0421 20:02:40.989243   12908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:02:40.989243   12908 out.go:304] Setting ErrFile to fd 272...
	I0421 20:02:40.989243   12908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:02:41.012241   12908 out.go:298] Setting JSON to false
	I0421 20:02:41.017074   12908 start.go:129] hostinfo: {"hostname":"minikube6","uptime":15636,"bootTime":1713714124,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 20:02:41.017379   12908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 20:02:41.023987   12908 out.go:177] * [multinode-152500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 20:02:41.029854   12908 notify.go:220] Checking for updates...
	I0421 20:02:41.033850   12908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:02:41.036579   12908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 20:02:41.039578   12908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 20:02:41.043579   12908 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 20:02:41.047613   12908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 20:02:41.050608   12908 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:02:41.051634   12908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 20:02:46.572747   12908 out.go:177] * Using the hyperv driver based on user configuration
	I0421 20:02:46.578846   12908 start.go:297] selected driver: hyperv
	I0421 20:02:46.578846   12908 start.go:901] validating driver "hyperv" against <nil>
	I0421 20:02:46.578846   12908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 20:02:46.638679   12908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 20:02:46.639677   12908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:02:46.639677   12908 cni.go:84] Creating CNI manager for ""
	I0421 20:02:46.639677   12908 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0421 20:02:46.639677   12908 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0421 20:02:46.640723   12908 start.go:340] cluster config:
	{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:02:46.640764   12908 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:02:46.644604   12908 out.go:177] * Starting "multinode-152500" primary control-plane node in "multinode-152500" cluster
	I0421 20:02:46.648890   12908 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:02:46.648890   12908 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 20:02:46.648890   12908 cache.go:56] Caching tarball of preloaded images
	I0421 20:02:46.650042   12908 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:02:46.650042   12908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:02:46.650042   12908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:02:46.650042   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json: {Name:mkaff3599f0e15310b6a08478d6a477b24edd2a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:02:46.651996   12908 start.go:360] acquireMachinesLock for multinode-152500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:02:46.651996   12908 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-152500"
	I0421 20:02:46.651996   12908 start.go:93] Provisioning new machine with config: &{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 20:02:46.651996   12908 start.go:125] createHost starting for "" (driver="hyperv")
	I0421 20:02:46.654370   12908 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 20:02:46.655576   12908 start.go:159] libmachine.API.Create for "multinode-152500" (driver="hyperv")
	I0421 20:02:46.655576   12908 client.go:168] LocalClient.Create starting
	I0421 20:02:46.656011   12908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 20:02:46.656011   12908 main.go:141] libmachine: Decoding PEM data...
	I0421 20:02:46.656011   12908 main.go:141] libmachine: Parsing certificate...
	I0421 20:02:46.657181   12908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 20:02:46.657396   12908 main.go:141] libmachine: Decoding PEM data...
	I0421 20:02:46.657396   12908 main.go:141] libmachine: Parsing certificate...
	I0421 20:02:46.657596   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 20:02:48.896969   12908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 20:02:48.896969   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:02:48.897088   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 20:02:50.732132   12908 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 20:02:50.732844   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:02:50.733000   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 20:02:52.321161   12908 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 20:02:52.321161   12908 main.go:141] libmachine: [stderr =====>] : 
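
Before touching any VM state, the driver gates creation on three host checks, all visible above: the Hyper-V PowerShell module is available, and the current user is tested against the Hyper-V Administrators group (SID S-1-5-32-578, False here) and the local Administrator role (True here). Condensed into plain PowerShell for reference (a sketch; the $principal variable is introduced here only for brevity and is not part of the driver's command):

    @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578"))   # Hyper-V Administrators group
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")               # local Administrator role

The run proceeds on the strength of the Administrator check despite the False result for the Hyper-V Administrators group.
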
	I0421 20:02:52.321676   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 20:02:56.082798   12908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 20:02:56.082798   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:02:56.088261   12908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 20:02:56.639153   12908 main.go:141] libmachine: Creating SSH key...
	I0421 20:02:56.810209   12908 main.go:141] libmachine: Creating VM...
	I0421 20:02:56.810410   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 20:02:59.849105   12908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 20:02:59.849603   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:02:59.849852   12908 main.go:141] libmachine: Using switch "Default Switch"
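
Switch selection is a single query: list all Hyper-V switches, keep those that are either External or the built-in Default Switch (matched by its GUID), and sort so an External switch would win if one existed. The same query as above, only wrapped across lines for readability:

    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    ConvertTo-Json @(
        Hyper-V\Get-VMSwitch |
            Select-Object Id, Name, SwitchType |
            Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
            Sort-Object -Property SwitchType
    )

No External switch exists on this host, so the only match is the Default Switch, which the driver then uses.
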
	I0421 20:02:59.849978   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 20:03:01.716873   12908 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 20:03:01.716873   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:01.716873   12908 main.go:141] libmachine: Creating VHD
	I0421 20:03:01.717443   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 20:03:05.465670   12908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9DAA87A9-DCA6-4841-8886-D222C4ACF42C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 20:03:05.465670   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:05.465670   12908 main.go:141] libmachine: Writing magic tar header
	I0421 20:03:05.465790   12908 main.go:141] libmachine: Writing SSH key tar header
	I0421 20:03:05.476003   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 20:03:08.640416   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:08.640798   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:08.640863   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\disk.vhd' -SizeBytes 20000MB
	I0421 20:03:11.230369   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:11.231286   12908 main.go:141] libmachine: [stderr =====>] : 
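
The machine disk is bootstrapped in the three steps logged above: create a tiny 10 MB fixed VHD, write the "magic tar header" and the SSH key tar header into it (done from Go, not PowerShell), then convert it to a dynamic VHD and grow it to the requested 20000 MB. Condensed, with a $machineDir variable introduced here purely for brevity:

    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500'
    Hyper-V\New-VHD -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...the driver writes the magic tar header and SSH key tar header into fixed.vhd at this point...
    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$machineDir\disk.vhd" -SizeBytes 20000MB
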
	I0421 20:03:11.231409   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-152500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 20:03:14.997943   12908 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-152500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 20:03:14.998028   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:14.998108   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-152500 -DynamicMemoryEnabled $false
	I0421 20:03:17.329118   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:17.329118   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:17.329386   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-152500 -Count 2
	I0421 20:03:19.508712   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:19.508712   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:19.508874   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-152500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\boot2docker.iso'
	I0421 20:03:22.048776   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:22.048776   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:22.048776   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-152500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\disk.vhd'
	I0421 20:03:24.722449   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:24.722670   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:24.722670   12908 main.go:141] libmachine: Starting VM...
	I0421 20:03:24.722670   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-152500
	I0421 20:03:27.944609   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:27.945281   12908 main.go:141] libmachine: [stderr =====>] : 
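
With the disk ready, the VM itself is assembled from the commands above: New-VM on the Default Switch with 2200 MB startup memory, dynamic memory disabled, two vCPUs, the boot2docker ISO attached as the DVD drive, the prepared disk attached, and Start-VM. The same sequence, condensed (again using $machineDir only for brevity):

    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500'
    Hyper-V\New-VM multinode-152500 -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName multinode-152500 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor multinode-152500 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName multinode-152500 -Path "$machineDir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName multinode-152500 -Path "$machineDir\disk.vhd"
    Hyper-V\Start-VM multinode-152500
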
	I0421 20:03:27.945281   12908 main.go:141] libmachine: Waiting for host to start...
	I0421 20:03:27.945330   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:03:30.243949   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:03:30.243949   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:30.244362   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:03:32.864579   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:32.864579   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:33.866548   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:03:36.039700   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:03:36.039700   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:36.040160   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:03:38.627391   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:38.627391   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:39.642267   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:03:41.811379   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:03:41.811379   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:41.812325   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:03:44.384928   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:44.384928   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:45.386063   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:03:47.621356   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:03:47.621356   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:47.621637   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:03:50.261668   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:03:50.261940   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:51.274276   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:03:53.537753   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:03:53.537881   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:53.537881   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:03:56.244555   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:03:56.244555   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:03:56.245034   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:03:58.469204   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:03:58.470301   12908 main.go:141] libmachine: [stderr =====>] : 
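
The "Waiting for host to start" phase is a plain polling loop: query the VM state and the first IP address of the first network adapter, sleeping roughly a second between rounds (judging by the timestamps), until the Default Switch's DHCP hands the guest an address. A minimal sketch of that loop; the two queries are taken verbatim from the log, the loop construct itself is an assumption:

    do {
        Start-Sleep -Seconds 1
        $state = ( Hyper-V\Get-VM multinode-152500 ).State
        $ip    = (( Hyper-V\Get-VM multinode-152500 ).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    $ip   # 172.27.198.190 in this run

Each later provisioning step repeats the same state/IP pair of queries before opening its SSH session, which is why the log below interleaves them between every remote command.
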
	I0421 20:03:58.470301   12908 machine.go:94] provisionDockerMachine start ...
	I0421 20:03:58.470301   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:00.720836   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:00.720836   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:00.720836   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:03.371631   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:03.371631   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:03.379101   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:04:03.388971   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:04:03.389973   12908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 20:04:03.521751   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 20:04:03.521751   12908 buildroot.go:166] provisioning hostname "multinode-152500"
	I0421 20:04:03.521751   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:05.704171   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:05.705241   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:05.705335   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:08.342116   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:08.343115   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:08.350025   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:04:08.350523   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:04:08.351084   12908 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-152500 && echo "multinode-152500" | sudo tee /etc/hostname
	I0421 20:04:08.516362   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-152500
	
	I0421 20:04:08.516362   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:10.667401   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:10.668294   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:10.668294   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:13.261903   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:13.262916   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:13.269061   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:04:13.269311   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:04:13.269311   12908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-152500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-152500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-152500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:04:13.427692   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:04:13.427757   12908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 20:04:13.427757   12908 buildroot.go:174] setting up certificates
	I0421 20:04:13.427870   12908 provision.go:84] configureAuth start
	I0421 20:04:13.427945   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:15.564208   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:15.564208   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:15.564494   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:18.162774   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:18.162874   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:18.163121   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:20.278616   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:20.279348   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:20.279434   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:22.856018   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:22.856018   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:22.856643   12908 provision.go:143] copyHostCerts
	I0421 20:04:22.856708   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 20:04:22.857235   12908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 20:04:22.857409   12908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 20:04:22.857887   12908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 20:04:22.858634   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 20:04:22.859216   12908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 20:04:22.859216   12908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 20:04:22.859216   12908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 20:04:22.860534   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 20:04:22.860534   12908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 20:04:22.860534   12908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 20:04:22.861225   12908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 20:04:22.862243   12908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-152500 san=[127.0.0.1 172.27.198.190 localhost minikube multinode-152500]
	I0421 20:04:23.223634   12908 provision.go:177] copyRemoteCerts
	I0421 20:04:23.238815   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:04:23.238984   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:25.406378   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:25.406578   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:25.406678   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:28.027917   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:28.028655   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:28.028891   12908 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:04:28.148549   12908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9096974s)
	I0421 20:04:28.148549   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 20:04:28.149362   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:04:28.202744   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 20:04:28.203267   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0421 20:04:28.271907   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 20:04:28.273122   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 20:04:28.324816   12908 provision.go:87] duration metric: took 14.8967476s to configureAuth
	I0421 20:04:28.324906   12908 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:04:28.325472   12908 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:04:28.325636   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:30.519920   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:30.519920   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:30.520006   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:33.169751   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:33.169873   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:33.176857   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:04:33.176950   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:04:33.176950   12908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 20:04:33.315577   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 20:04:33.315692   12908 buildroot.go:70] root file system type: tmpfs
	I0421 20:04:33.315909   12908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 20:04:33.316028   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:35.462870   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:35.463864   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:35.464006   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:38.037202   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:38.037202   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:38.045827   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:04:38.045827   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:04:38.045827   12908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 20:04:38.210642   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 20:04:38.210642   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:40.351551   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:40.352444   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:40.352444   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:42.988182   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:42.988182   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:42.994157   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:04:42.995250   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:04:42.995250   12908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 20:04:45.265317   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 20:04:45.265523   12908 machine.go:97] duration metric: took 46.7948803s to provisionDockerMachine
	I0421 20:04:45.265614   12908 client.go:171] duration metric: took 1m58.6090811s to LocalClient.Create
	I0421 20:04:45.265690   12908 start.go:167] duration metric: took 1m58.6091725s to libmachine.API.Create "multinode-152500"
	I0421 20:04:45.265755   12908 start.go:293] postStartSetup for "multinode-152500" (driver="hyperv")
	I0421 20:04:45.265755   12908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:04:45.278161   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:04:45.278161   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:47.434013   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:47.434344   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:47.434511   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:50.080301   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:50.081253   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:50.081968   12908 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:04:50.196786   12908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9185888s)
	I0421 20:04:50.213635   12908 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:04:50.221598   12908 command_runner.go:130] > NAME=Buildroot
	I0421 20:04:50.221598   12908 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 20:04:50.221598   12908 command_runner.go:130] > ID=buildroot
	I0421 20:04:50.221674   12908 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 20:04:50.221674   12908 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 20:04:50.221674   12908 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:04:50.221674   12908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 20:04:50.222335   12908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 20:04:50.223743   12908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 20:04:50.223821   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 20:04:50.237905   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:04:50.256958   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 20:04:50.307790   12908 start.go:296] duration metric: took 5.0419974s for postStartSetup
	I0421 20:04:50.311498   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:52.452426   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:52.452426   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:52.453812   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:55.085515   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:55.085515   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:55.086535   12908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:04:55.089228   12908 start.go:128] duration metric: took 2m8.4362948s to createHost
	I0421 20:04:55.089377   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:04:57.277186   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:04:57.277767   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:57.277831   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:04:59.904635   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:04:59.904824   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:04:59.911677   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:04:59.911812   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:04:59.911812   12908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 20:05:00.050893   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713729900.062635292
	
	I0421 20:05:00.050976   12908 fix.go:216] guest clock: 1713729900.062635292
	I0421 20:05:00.050976   12908 fix.go:229] Guest: 2024-04-21 20:05:00.062635292 +0000 UTC Remote: 2024-04-21 20:04:55.0892287 +0000 UTC m=+134.283322201 (delta=4.973406592s)
	I0421 20:05:00.051050   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:05:02.204291   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:05:02.204291   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:02.204876   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:05:04.804017   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:05:04.804017   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:04.810823   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:05:04.810823   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.198.190 22 <nil> <nil>}
	I0421 20:05:04.810823   12908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713729900
	I0421 20:05:04.970497   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 20:05:00 UTC 2024
	
	I0421 20:05:04.970497   12908 fix.go:236] clock set: Sun Apr 21 20:05:00 UTC 2024
	 (err=<nil>)
	I0421 20:05:04.970497   12908 start.go:83] releasing machines lock for "multinode-152500", held for 2m18.3174918s
	I0421 20:05:04.971194   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:05:07.092152   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:05:07.092152   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:07.092563   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:05:09.759147   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:05:09.759147   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:09.764317   12908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:05:09.764483   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:05:09.776199   12908 ssh_runner.go:195] Run: cat /version.json
	I0421 20:05:09.776199   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:05:11.939978   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:05:11.939978   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:11.939978   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:05:11.991603   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:05:11.991671   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:11.991671   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:05:14.624573   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:05:14.624573   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:14.624573   12908 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:05:14.664182   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:05:14.664182   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:05:14.664182   12908 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:05:14.870676   12908 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 20:05:14.870676   12908 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1063218s)
	I0421 20:05:14.870830   12908 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0421 20:05:14.870911   12908 ssh_runner.go:235] Completed: cat /version.json: (5.0946742s)
	I0421 20:05:14.885801   12908 ssh_runner.go:195] Run: systemctl --version
	I0421 20:05:14.895798   12908 command_runner.go:130] > systemd 252 (252)
	I0421 20:05:14.895897   12908 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0421 20:05:14.910646   12908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 20:05:14.925532   12908 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0421 20:05:14.925532   12908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:05:14.941253   12908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:05:14.975635   12908 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0421 20:05:14.975635   12908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 20:05:14.975635   12908 start.go:494] detecting cgroup driver to use...
	I0421 20:05:14.975635   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:05:15.016765   12908 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0421 20:05:15.031827   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 20:05:15.077675   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 20:05:15.100425   12908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 20:05:15.115208   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 20:05:15.152420   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:05:15.187586   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 20:05:15.223674   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:05:15.260762   12908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:05:15.301104   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 20:05:15.337571   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 20:05:15.371286   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 20:05:15.405926   12908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:05:15.426860   12908 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 20:05:15.441596   12908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:05:15.478781   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:15.685703   12908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 20:05:15.720291   12908 start.go:494] detecting cgroup driver to use...
	I0421 20:05:15.736869   12908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 20:05:15.764250   12908 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0421 20:05:15.764250   12908 command_runner.go:130] > [Unit]
	I0421 20:05:15.764250   12908 command_runner.go:130] > Description=Docker Application Container Engine
	I0421 20:05:15.764250   12908 command_runner.go:130] > Documentation=https://docs.docker.com
	I0421 20:05:15.764250   12908 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0421 20:05:15.764250   12908 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0421 20:05:15.764250   12908 command_runner.go:130] > StartLimitBurst=3
	I0421 20:05:15.764250   12908 command_runner.go:130] > StartLimitIntervalSec=60
	I0421 20:05:15.764250   12908 command_runner.go:130] > [Service]
	I0421 20:05:15.764250   12908 command_runner.go:130] > Type=notify
	I0421 20:05:15.764250   12908 command_runner.go:130] > Restart=on-failure
	I0421 20:05:15.764250   12908 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0421 20:05:15.764250   12908 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0421 20:05:15.764250   12908 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0421 20:05:15.764250   12908 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0421 20:05:15.764250   12908 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0421 20:05:15.764250   12908 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0421 20:05:15.764250   12908 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0421 20:05:15.764250   12908 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0421 20:05:15.764250   12908 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0421 20:05:15.764250   12908 command_runner.go:130] > ExecStart=
	I0421 20:05:15.764250   12908 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0421 20:05:15.764250   12908 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0421 20:05:15.764250   12908 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0421 20:05:15.764250   12908 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0421 20:05:15.764250   12908 command_runner.go:130] > LimitNOFILE=infinity
	I0421 20:05:15.764250   12908 command_runner.go:130] > LimitNPROC=infinity
	I0421 20:05:15.764250   12908 command_runner.go:130] > LimitCORE=infinity
	I0421 20:05:15.764250   12908 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0421 20:05:15.764250   12908 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0421 20:05:15.764815   12908 command_runner.go:130] > TasksMax=infinity
	I0421 20:05:15.764815   12908 command_runner.go:130] > TimeoutStartSec=0
	I0421 20:05:15.764815   12908 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0421 20:05:15.764815   12908 command_runner.go:130] > Delegate=yes
	I0421 20:05:15.764897   12908 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0421 20:05:15.764897   12908 command_runner.go:130] > KillMode=process
	I0421 20:05:15.764897   12908 command_runner.go:130] > [Install]
	I0421 20:05:15.764897   12908 command_runner.go:130] > WantedBy=multi-user.target
	I0421 20:05:15.779702   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:05:15.818824   12908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:05:15.872591   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:05:15.917777   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:05:15.958580   12908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 20:05:16.022561   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:05:16.047829   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:05:16.082997   12908 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0421 20:05:16.096509   12908 ssh_runner.go:195] Run: which cri-dockerd
	I0421 20:05:16.104378   12908 command_runner.go:130] > /usr/bin/cri-dockerd
	I0421 20:05:16.119513   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 20:05:16.144422   12908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 20:05:16.192354   12908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 20:05:16.421652   12908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 20:05:16.641132   12908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 20:05:16.641394   12908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 20:05:16.698885   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:16.930838   12908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 20:05:19.499344   12908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5683991s)
	I0421 20:05:19.511642   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 20:05:19.548634   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:05:19.590014   12908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 20:05:19.810972   12908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 20:05:20.039531   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:20.263773   12908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 20:05:20.310826   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:05:20.348815   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:20.570859   12908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 20:05:20.701555   12908 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 20:05:20.714499   12908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 20:05:20.724512   12908 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0421 20:05:20.724512   12908 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 20:05:20.724512   12908 command_runner.go:130] > Device: 0,22	Inode: 890         Links: 1
	I0421 20:05:20.724512   12908 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0421 20:05:20.724512   12908 command_runner.go:130] > Access: 2024-04-21 20:05:20.610935865 +0000
	I0421 20:05:20.724793   12908 command_runner.go:130] > Modify: 2024-04-21 20:05:20.610935865 +0000
	I0421 20:05:20.724793   12908 command_runner.go:130] > Change: 2024-04-21 20:05:20.614935877 +0000
	I0421 20:05:20.724793   12908 command_runner.go:130] >  Birth: -
	I0421 20:05:20.724963   12908 start.go:562] Will wait 60s for crictl version
	I0421 20:05:20.740270   12908 ssh_runner.go:195] Run: which crictl
	I0421 20:05:20.747250   12908 command_runner.go:130] > /usr/bin/crictl
	I0421 20:05:20.760919   12908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:05:20.824506   12908 command_runner.go:130] > Version:  0.1.0
	I0421 20:05:20.825272   12908 command_runner.go:130] > RuntimeName:  docker
	I0421 20:05:20.825364   12908 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0421 20:05:20.825364   12908 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 20:05:20.825364   12908 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 20:05:20.835031   12908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:05:20.870312   12908 command_runner.go:130] > 26.0.1
	I0421 20:05:20.881624   12908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:05:20.918667   12908 command_runner.go:130] > 26.0.1
	I0421 20:05:20.924021   12908 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 20:05:20.924021   12908 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 20:05:20.929021   12908 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 20:05:20.929021   12908 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 20:05:20.929021   12908 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 20:05:20.929021   12908 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 20:05:20.933030   12908 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 20:05:20.933030   12908 ip.go:210] interface addr: 172.27.192.1/20
	I0421 20:05:20.946037   12908 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 20:05:20.953781   12908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
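The one-liner above is minikube's idiom for pinning a host entry: drop any stale line for the name, append the fresh IP-to-name mapping, and copy the temp file back over /etc/hosts. Written out step by step with placeholder variables (NAME and IP stand in for the values in the log), a hand-run equivalent is:

	# Same idea as the log's bash -c one-liner, unrolled (NAME/IP are placeholders)
	NAME=host.minikube.internal
	IP=172.27.192.1
	{ grep -v "${NAME}\$" /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts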
	I0421 20:05:20.977978   12908 kubeadm.go:877] updating cluster {Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-1
52500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:05:20.978222   12908 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:05:20.988588   12908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 20:05:21.017206   12908 docker.go:685] Got preloaded images: 
	I0421 20:05:21.017290   12908 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0421 20:05:21.031386   12908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 20:05:21.055239   12908 command_runner.go:139] > {"Repositories":{}}
	I0421 20:05:21.070516   12908 ssh_runner.go:195] Run: which lz4
	I0421 20:05:21.078098   12908 command_runner.go:130] > /usr/bin/lz4
	I0421 20:05:21.078164   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0421 20:05:21.091291   12908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0421 20:05:21.099384   12908 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 20:05:21.099480   12908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0421 20:05:21.099480   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0421 20:05:23.042925   12908 docker.go:649] duration metric: took 1.9645628s to copy over tarball
	I0421 20:05:23.058977   12908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0421 20:05:31.933052   12908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8740096s)
	I0421 20:05:31.933162   12908 ssh_runner.go:146] rm: /preloaded.tar.lz4
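Condensed, the preload path the log just walked (copy the lz4 tarball to /preloaded.tar.lz4, unpack it into /var, delete it, then restart Docker so the images register) comes down to the commands below, all taken from the log itself; only the final listing is a verification step.

	# Unpack the preloaded image tarball into Docker's storage, then clean up
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo systemctl restart docker
	docker images --format '{{.Repository}}:{{.Tag}}'   # the registry.k8s.io/* images should now be listed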
	I0421 20:05:32.007449   12908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0421 20:05:32.027859   12908 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0421 20:05:32.028232   12908 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0421 20:05:32.075877   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:32.324008   12908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 20:05:35.724685   12908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4006522s)
	I0421 20:05:35.736880   12908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 20:05:35.760229   12908 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0421 20:05:35.760229   12908 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0421 20:05:35.760229   12908 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0421 20:05:35.760229   12908 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0421 20:05:35.760229   12908 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0421 20:05:35.760229   12908 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0421 20:05:35.760229   12908 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0421 20:05:35.760229   12908 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:05:35.760229   12908 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0421 20:05:35.760229   12908 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:05:35.760229   12908 kubeadm.go:928] updating node { 172.27.198.190 8443 v1.30.0 docker true true} ...
	I0421 20:05:35.760229   12908 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.198.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
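The ExecStart override printed above is what lands in the kubelet drop-in that the upcoming scp writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hand-written equivalent of that override is sketched below; how minikube actually splits content between kubelet.service and the drop-in may differ slightly.

	# Sketch of the kubelet override (exact generated file may differ)
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=docker.socket

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.198.190
	EOF
	sudo systemctl daemon-reload && sudo systemctl start kubelet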
	I0421 20:05:35.771478   12908 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0421 20:05:35.808491   12908 command_runner.go:130] > cgroupfs
	I0421 20:05:35.809668   12908 cni.go:84] Creating CNI manager for ""
	I0421 20:05:35.809668   12908 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 20:05:35.809668   12908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:05:35.809749   12908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.198.190 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-152500 NodeName:multinode-152500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.198.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.198.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:05:35.810046   12908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.198.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-152500"
	  kubeletExtraArgs:
	    node-ip: 172.27.198.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.198.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:05:35.823493   12908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:05:35.842062   12908 command_runner.go:130] > kubeadm
	I0421 20:05:35.842062   12908 command_runner.go:130] > kubectl
	I0421 20:05:35.843293   12908 command_runner.go:130] > kubelet
	I0421 20:05:35.843346   12908 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:05:35.856210   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:05:35.876004   12908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0421 20:05:35.912318   12908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:05:35.949007   12908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
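At this point the rendered kubeadm config sits at /var/tmp/minikube/kubeadm.yaml.new (it is copied to kubeadm.yaml just before init a few lines below). When debugging a failed start, it can be sanity-checked without creating anything on the node, e.g.:

	# Pre-pull the images the config references and do a dry run of init
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run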
	I0421 20:05:35.998541   12908 ssh_runner.go:195] Run: grep 172.27.198.190	control-plane.minikube.internal$ /etc/hosts
	I0421 20:05:36.005495   12908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.198.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:05:36.041640   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:05:36.248090   12908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:05:36.278015   12908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500 for IP: 172.27.198.190
	I0421 20:05:36.278015   12908 certs.go:194] generating shared ca certs ...
	I0421 20:05:36.278110   12908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:36.279064   12908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 20:05:36.279651   12908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 20:05:36.279946   12908 certs.go:256] generating profile certs ...
	I0421 20:05:36.279946   12908 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.key
	I0421 20:05:36.280527   12908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.crt with IP's: []
	I0421 20:05:36.895092   12908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.crt ...
	I0421 20:05:36.895092   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.crt: {Name:mkf7f780ce916668c8611cde38e3b650296576a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:36.896625   12908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.key ...
	I0421 20:05:36.896625   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.key: {Name:mkc6431ac29e01241f9fc2114bd17a2357f78f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:36.897978   12908 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.30cb2b00
	I0421 20:05:36.897978   12908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.30cb2b00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.198.190]
	I0421 20:05:37.420825   12908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.30cb2b00 ...
	I0421 20:05:37.420825   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.30cb2b00: {Name:mk839ba5b5cf2b1035cba0c125ac7ab89f11cbc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:37.422301   12908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.30cb2b00 ...
	I0421 20:05:37.422301   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.30cb2b00: {Name:mkfdda5cad9790bc291f6befd7f7e6e0f51a2547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:37.422649   12908 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.30cb2b00 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt
	I0421 20:05:37.438124   12908 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.30cb2b00 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key
	I0421 20:05:37.439506   12908 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key
	I0421 20:05:37.439683   12908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt with IP's: []
	I0421 20:05:37.695954   12908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt ...
	I0421 20:05:37.695954   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt: {Name:mk639595bb6a2503cda6d18303677e979f1b4dfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:37.698138   12908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key ...
	I0421 20:05:37.698138   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key: {Name:mkd8dddec7db96258b382fcf0836029eb6b78592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:05:37.698498   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 20:05:37.699535   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 20:05:37.699836   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 20:05:37.699836   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 20:05:37.699836   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 20:05:37.699836   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 20:05:37.700480   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 20:05:37.709486   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 20:05:37.710542   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 20:05:37.711202   12908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 20:05:37.711403   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 20:05:37.711539   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 20:05:37.711861   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 20:05:37.712201   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 20:05:37.712876   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 20:05:37.713485   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:05:37.713698   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 20:05:37.713885   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 20:05:37.715484   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:05:37.767154   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:05:37.814787   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:05:37.869153   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:05:37.914166   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 20:05:37.959261   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:05:38.009280   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:05:38.061693   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:05:38.108522   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:05:38.156094   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 20:05:38.206943   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 20:05:38.256932   12908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:05:38.305790   12908 ssh_runner.go:195] Run: openssl version
	I0421 20:05:38.316107   12908 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 20:05:38.332572   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 20:05:38.368894   12908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 20:05:38.376986   12908 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:05:38.377897   12908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:05:38.390528   12908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 20:05:38.402031   12908 command_runner.go:130] > 51391683
	I0421 20:05:38.417437   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 20:05:38.455425   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 20:05:38.496901   12908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 20:05:38.504453   12908 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:05:38.504453   12908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:05:38.517792   12908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 20:05:38.528334   12908 command_runner.go:130] > 3ec20f2e
	I0421 20:05:38.545127   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:05:38.596753   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:05:38.635969   12908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:05:38.643654   12908 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:05:38.643741   12908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:05:38.657661   12908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:05:38.667916   12908 command_runner.go:130] > b5213941
	I0421 20:05:38.683500   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
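The value printed by `openssl x509 -hash -noout` is the subject hash that names the /etc/ssl/certs/<hash>.0 symlink, which is how OpenSSL locates a trusted CA by directory lookup. The minikubeCA entry installed above can be checked the same way:

	# The subject hash names the symlink OpenSSL looks up in /etc/ssl/certs
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 per the log
	ls -l /etc/ssl/certs/b5213941.0
	# A cert issued by minikubeCA should now verify against the hashed directory
	openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt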
	I0421 20:05:38.716452   12908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:05:38.723547   12908 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:05:38.724853   12908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:05:38.724853   12908 kubeadm.go:391] StartCluster: {Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-1525
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:05:38.737969   12908 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 20:05:38.777421   12908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:05:38.797881   12908 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0421 20:05:38.797881   12908 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0421 20:05:38.797881   12908 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0421 20:05:38.812233   12908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:05:38.845921   12908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:05:38.865346   12908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0421 20:05:38.865346   12908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0421 20:05:38.865346   12908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0421 20:05:38.865346   12908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:05:38.865346   12908 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:05:38.865346   12908 kubeadm.go:156] found existing configuration files:
	
	I0421 20:05:38.879294   12908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:05:38.897816   12908 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:05:38.898124   12908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:05:38.912587   12908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:05:38.946548   12908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:05:38.967623   12908 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:05:38.967623   12908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:05:38.986019   12908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:05:39.021566   12908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:05:39.040664   12908 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:05:39.041770   12908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:05:39.057559   12908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:05:39.089328   12908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:05:39.115303   12908 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:05:39.115682   12908 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:05:39.133623   12908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:05:39.157106   12908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0421 20:05:39.476934   12908 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0421 20:05:39.476934   12908 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0421 20:05:39.476934   12908 kubeadm.go:309] [preflight] Running pre-flight checks
	I0421 20:05:39.476934   12908 command_runner.go:130] > [preflight] Running pre-flight checks
	I0421 20:05:39.671000   12908 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:05:39.671099   12908 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0421 20:05:39.671403   12908 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:05:39.671403   12908 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0421 20:05:39.671601   12908 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:05:39.671673   12908 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0421 20:05:39.995725   12908 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:05:39.995813   12908 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:05:40.001651   12908 out.go:204]   - Generating certificates and keys ...
	I0421 20:05:40.001651   12908 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0421 20:05:40.001651   12908 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0421 20:05:40.001651   12908 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0421 20:05:40.001651   12908 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0421 20:05:40.246670   12908 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 20:05:40.246670   12908 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0421 20:05:40.408278   12908 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0421 20:05:40.408356   12908 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0421 20:05:40.949507   12908 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0421 20:05:40.949573   12908 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0421 20:05:41.148362   12908 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0421 20:05:41.148362   12908 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0421 20:05:41.271013   12908 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0421 20:05:41.271013   12908 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0421 20:05:41.271013   12908 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-152500] and IPs [172.27.198.190 127.0.0.1 ::1]
	I0421 20:05:41.271013   12908 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-152500] and IPs [172.27.198.190 127.0.0.1 ::1]
	I0421 20:05:41.466026   12908 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0421 20:05:41.466663   12908 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0421 20:05:41.467021   12908 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-152500] and IPs [172.27.198.190 127.0.0.1 ::1]
	I0421 20:05:41.467042   12908 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-152500] and IPs [172.27.198.190 127.0.0.1 ::1]
	I0421 20:05:41.799569   12908 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 20:05:41.799569   12908 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0421 20:05:42.021362   12908 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 20:05:42.021456   12908 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0421 20:05:42.109879   12908 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0421 20:05:42.109953   12908 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0421 20:05:42.110387   12908 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:05:42.110489   12908 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:05:42.325208   12908 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:05:42.326155   12908 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:05:42.562100   12908 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:05:42.562192   12908 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:05:42.837798   12908 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:05:42.837869   12908 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:05:43.417799   12908 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:05:43.418208   12908 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:05:43.661844   12908 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:05:43.661844   12908 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:05:43.662543   12908 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:05:43.662543   12908 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:05:43.667973   12908 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:05:43.671991   12908 out.go:204]   - Booting up control plane ...
	I0421 20:05:43.667973   12908 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:05:43.672228   12908 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:05:43.672365   12908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:05:43.672685   12908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:05:43.672685   12908 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:05:43.672912   12908 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:05:43.672912   12908 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:05:43.697959   12908 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:05:43.698034   12908 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:05:43.699539   12908 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:05:43.699649   12908 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:05:43.699796   12908 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0421 20:05:43.699796   12908 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0421 20:05:43.924257   12908 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:05:43.924257   12908 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0421 20:05:43.924257   12908 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:05:43.924257   12908 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:05:44.927198   12908 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001890989s
	I0421 20:05:44.927198   12908 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001890989s
	I0421 20:05:44.927518   12908 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:05:44.927518   12908 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0421 20:05:52.429047   12908 kubeadm.go:309] [api-check] The API server is healthy after 7.502830076s
	I0421 20:05:52.429047   12908 command_runner.go:130] > [api-check] The API server is healthy after 7.502830076s
	I0421 20:05:52.452403   12908 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:05:52.452403   12908 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0421 20:05:52.483159   12908 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:05:52.483159   12908 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0421 20:05:52.535688   12908 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:05:52.535688   12908 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0421 20:05:52.536298   12908 kubeadm.go:309] [mark-control-plane] Marking the node multinode-152500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:05:52.536391   12908 command_runner.go:130] > [mark-control-plane] Marking the node multinode-152500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0421 20:05:52.573687   12908 kubeadm.go:309] [bootstrap-token] Using token: ly7ah8.pmactml7z8if029w
	I0421 20:05:52.578227   12908 out.go:204]   - Configuring RBAC rules ...
	I0421 20:05:52.574143   12908 command_runner.go:130] > [bootstrap-token] Using token: ly7ah8.pmactml7z8if029w
	I0421 20:05:52.578644   12908 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:05:52.578644   12908 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0421 20:05:52.590837   12908 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:05:52.590837   12908 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0421 20:05:52.604787   12908 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:05:52.604941   12908 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0421 20:05:52.615668   12908 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:05:52.615761   12908 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0421 20:05:52.621910   12908 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:05:52.621910   12908 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0421 20:05:52.631609   12908 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:05:52.631609   12908 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0421 20:05:52.840983   12908 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:05:52.841155   12908 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0421 20:05:53.309441   12908 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0421 20:05:53.309753   12908 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0421 20:05:53.843999   12908 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0421 20:05:53.843999   12908 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0421 20:05:53.846105   12908 kubeadm.go:309] 
	I0421 20:05:53.846475   12908 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0421 20:05:53.846552   12908 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0421 20:05:53.846574   12908 kubeadm.go:309] 
	I0421 20:05:53.846814   12908 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0421 20:05:53.846877   12908 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0421 20:05:53.846877   12908 kubeadm.go:309] 
	I0421 20:05:53.846945   12908 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0421 20:05:53.847008   12908 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0421 20:05:53.847169   12908 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:05:53.847169   12908 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0421 20:05:53.847320   12908 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:05:53.847404   12908 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0421 20:05:53.847404   12908 kubeadm.go:309] 
	I0421 20:05:53.847578   12908 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0421 20:05:53.847643   12908 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0421 20:05:53.847643   12908 kubeadm.go:309] 
	I0421 20:05:53.847705   12908 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:05:53.847827   12908 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0421 20:05:53.847855   12908 kubeadm.go:309] 
	I0421 20:05:53.848042   12908 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0421 20:05:53.848042   12908 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0421 20:05:53.848042   12908 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:05:53.848042   12908 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0421 20:05:53.848042   12908 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:05:53.848042   12908 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0421 20:05:53.848042   12908 kubeadm.go:309] 
	I0421 20:05:53.848042   12908 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:05:53.848042   12908 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0421 20:05:53.848042   12908 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0421 20:05:53.848042   12908 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0421 20:05:53.848042   12908 kubeadm.go:309] 
	I0421 20:05:53.848042   12908 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ly7ah8.pmactml7z8if029w \
	I0421 20:05:53.848042   12908 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ly7ah8.pmactml7z8if029w \
	I0421 20:05:53.848042   12908 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 \
	I0421 20:05:53.848042   12908 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 \
	I0421 20:05:53.848042   12908 command_runner.go:130] > 	--control-plane 
	I0421 20:05:53.848042   12908 kubeadm.go:309] 	--control-plane 
	I0421 20:05:53.848042   12908 kubeadm.go:309] 
	I0421 20:05:53.849143   12908 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:05:53.849143   12908 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0421 20:05:53.849299   12908 kubeadm.go:309] 
	I0421 20:05:53.849425   12908 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ly7ah8.pmactml7z8if029w \
	I0421 20:05:53.849425   12908 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ly7ah8.pmactml7z8if029w \
	I0421 20:05:53.849425   12908 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 20:05:53.849425   12908 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 20:05:53.849961   12908 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:05:53.849961   12908 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
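
For reference, the --discovery-token-ca-cert-hash printed in the join commands above is, by kubeadm convention, the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the cluster CA's public key. A minimal, illustrative Go sketch (not minikube's own code; it assumes the default kubeadm CA path /etc/kubernetes/pki/ca.crt on the control-plane node) that recomputes the value:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Default kubeadm CA location on the control-plane node.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%s\n", hex.EncodeToString(sum[:]))
}
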
	I0421 20:05:53.849961   12908 cni.go:84] Creating CNI manager for ""
	I0421 20:05:53.849961   12908 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0421 20:05:53.852668   12908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0421 20:05:53.869251   12908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 20:05:53.878243   12908 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0421 20:05:53.878243   12908 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0421 20:05:53.878243   12908 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0421 20:05:53.878243   12908 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 20:05:53.878243   12908 command_runner.go:130] > Access: 2024-04-21 20:03:54.932038900 +0000
	I0421 20:05:53.878243   12908 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0421 20:05:53.878243   12908 command_runner.go:130] > Change: 2024-04-21 20:03:45.358000000 +0000
	I0421 20:05:53.878243   12908 command_runner.go:130] >  Birth: -
	I0421 20:05:53.878243   12908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 20:05:53.879234   12908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0421 20:05:53.934670   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 20:05:54.667057   12908 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0421 20:05:54.667057   12908 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0421 20:05:54.667057   12908 command_runner.go:130] > serviceaccount/kindnet created
	I0421 20:05:54.667057   12908 command_runner.go:130] > daemonset.apps/kindnet created
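
The CNI step above copies a kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. A stripped-down Go sketch of that shell-out, using the paths recorded in the log (illustrative only, not the actual ssh_runner implementation, and the paths are only valid inside the VM):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
	// Mirrors the command recorded in the log; on the VM it is run through sudo.
	cmd := exec.Command("sudo", kubectl,
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out) // e.g. "daemonset.apps/kindnet created"
}
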
	I0421 20:05:54.667057   12908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:05:54.684079   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-152500 minikube.k8s.io/updated_at=2024_04_21T20_05_54_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=multinode-152500 minikube.k8s.io/primary=true
	I0421 20:05:54.684079   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:54.698327   12908 command_runner.go:130] > -16
	I0421 20:05:54.698435   12908 ops.go:34] apiserver oom_adj: -16
	I0421 20:05:54.918048   12908 command_runner.go:130] > node/multinode-152500 labeled
	I0421 20:05:54.918263   12908 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
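
The two kubectl invocations above (labeling the primary node and creating the minikube-rbac binding) can also be expressed against the API directly. A hedged client-go sketch of the equivalent calls, assuming the in-VM kubeconfig path and node name from this run; minikube itself shells out to kubectl as logged:

package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Label the primary node, as `kubectl label --overwrite nodes ...` does above.
	node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-152500", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "true"
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Grant cluster-admin to kube-system:default, as the minikube-rbac binding does.
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
		Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
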
	I0421 20:05:54.933096   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:55.070596   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:55.444979   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:55.577229   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:55.948062   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:56.073344   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:56.435608   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:56.562497   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:56.934764   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:57.062661   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:57.434154   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:57.553693   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:57.935892   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:58.048483   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:58.437179   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:58.566437   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:58.942885   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:59.074570   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:59.447271   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:05:59.571810   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:05:59.948536   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:00.080565   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:00.435279   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:00.554392   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:00.939744   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:01.068232   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:01.444748   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:01.566617   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:01.944113   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:02.071228   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:02.433954   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:02.563662   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:02.939776   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:03.072369   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:03.435419   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:03.582074   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:03.942536   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:04.068958   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:04.442077   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:04.565590   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:04.944387   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:05.067593   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:05.442700   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:05.574686   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:05.950763   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:06.090628   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:06.437169   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:06.569255   12908 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0421 20:06:06.944744   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0421 20:06:07.113683   12908 command_runner.go:130] > NAME      SECRETS   AGE
	I0421 20:06:07.114279   12908 command_runner.go:130] > default   0         0s
	I0421 20:06:07.114406   12908 kubeadm.go:1107] duration metric: took 12.447132s to wait for elevateKubeSystemPrivileges
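
The long run of 'serviceaccounts "default" not found' lines above is expected: minikube polls roughly every half second until kube-controller-manager creates the default ServiceAccount before elevating its privileges, which took about 12.4s here. A minimal client-go sketch of the same wait (illustrative; the kubeconfig path, timeout, and poll interval are assumptions based on the cadence in the log):

package main

import (
	"context"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			log.Println("default ServiceAccount is present")
			return
		}
		if !apierrors.IsNotFound(err) {
			log.Fatal(err) // unexpected API error
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for the default ServiceAccount")
		case <-time.After(500 * time.Millisecond): // matches the ~0.5s cadence in the log
		}
	}
}
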
	W0421 20:06:07.114504   12908 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0421 20:06:07.114641   12908 kubeadm.go:393] duration metric: took 28.3894445s to StartCluster
	I0421 20:06:07.114756   12908 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:06:07.115045   12908 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:06:07.118759   12908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:06:07.121373   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0421 20:06:07.121467   12908 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 20:06:07.121467   12908 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:06:07.125863   12908 out.go:177] * Verifying Kubernetes components...
	I0421 20:06:07.121569   12908 addons.go:69] Setting storage-provisioner=true in profile "multinode-152500"
	I0421 20:06:07.121646   12908 addons.go:69] Setting default-storageclass=true in profile "multinode-152500"
	I0421 20:06:07.122147   12908 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:06:07.130784   12908 addons.go:234] Setting addon storage-provisioner=true in "multinode-152500"
	I0421 20:06:07.130784   12908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-152500"
	I0421 20:06:07.130784   12908 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:06:07.131784   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:06:07.131784   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:06:07.145786   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:06:07.501426   12908 command_runner.go:130] > apiVersion: v1
	I0421 20:06:07.501426   12908 command_runner.go:130] > data:
	I0421 20:06:07.501426   12908 command_runner.go:130] >   Corefile: |
	I0421 20:06:07.501426   12908 command_runner.go:130] >     .:53 {
	I0421 20:06:07.501545   12908 command_runner.go:130] >         errors
	I0421 20:06:07.501545   12908 command_runner.go:130] >         health {
	I0421 20:06:07.501545   12908 command_runner.go:130] >            lameduck 5s
	I0421 20:06:07.501545   12908 command_runner.go:130] >         }
	I0421 20:06:07.501545   12908 command_runner.go:130] >         ready
	I0421 20:06:07.501598   12908 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0421 20:06:07.501598   12908 command_runner.go:130] >            pods insecure
	I0421 20:06:07.501598   12908 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0421 20:06:07.501655   12908 command_runner.go:130] >            ttl 30
	I0421 20:06:07.501655   12908 command_runner.go:130] >         }
	I0421 20:06:07.501655   12908 command_runner.go:130] >         prometheus :9153
	I0421 20:06:07.501655   12908 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0421 20:06:07.501655   12908 command_runner.go:130] >            max_concurrent 1000
	I0421 20:06:07.501718   12908 command_runner.go:130] >         }
	I0421 20:06:07.501718   12908 command_runner.go:130] >         cache 30
	I0421 20:06:07.501718   12908 command_runner.go:130] >         loop
	I0421 20:06:07.501718   12908 command_runner.go:130] >         reload
	I0421 20:06:07.501718   12908 command_runner.go:130] >         loadbalance
	I0421 20:06:07.501718   12908 command_runner.go:130] >     }
	I0421 20:06:07.501718   12908 command_runner.go:130] > kind: ConfigMap
	I0421 20:06:07.501718   12908 command_runner.go:130] > metadata:
	I0421 20:06:07.501718   12908 command_runner.go:130] >   creationTimestamp: "2024-04-21T20:05:53Z"
	I0421 20:06:07.501784   12908 command_runner.go:130] >   name: coredns
	I0421 20:06:07.501784   12908 command_runner.go:130] >   namespace: kube-system
	I0421 20:06:07.501784   12908 command_runner.go:130] >   resourceVersion: "261"
	I0421 20:06:07.501784   12908 command_runner.go:130] >   uid: 81d7573a-bd7e-4f1f-83d6-59a06caaee4f
	I0421 20:06:07.502058   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0421 20:06:07.597161   12908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:06:08.204444   12908 command_runner.go:130] > configmap/coredns replaced
	I0421 20:06:08.204964   12908 start.go:946] {"host.minikube.internal": 172.27.192.1} host record injected into CoreDNS's ConfigMap
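
The sed pipeline above injects a hosts { ... } stanza for host.minikube.internal ahead of the forward plugin in the coredns ConfigMap (it also enables the log plugin, which is omitted here). The same edit expressed with client-go, as a sketch under the assumption that the Corefile still contains the stock "forward . /etc/resolv.conf" line; the real code rewrites the ConfigMap through kubectl replace as logged:

package main

import (
	"context"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	hosts := "hosts {\n           172.27.192.1 host.minikube.internal\n           fallthrough\n        }\n        "
	// Insert the hosts stanza immediately before the forward plugin, as the sed expression does.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"forward . /etc/resolv.conf", hosts+"forward . /etc/resolv.conf", 1)
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("configmap/coredns updated with host.minikube.internal record")
}
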
	I0421 20:06:08.206066   12908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:06:08.207218   12908 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.198.190:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:06:08.208974   12908 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 20:06:08.209161   12908 node_ready.go:35] waiting up to 6m0s for node "multinode-152500" to be "Ready" ...
	I0421 20:06:08.209648   12908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:06:08.209648   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:08.209648   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:08.209648   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:08.209648   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:08.209648   12908 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.198.190:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:06:08.211751   12908 round_trippers.go:463] GET https://172.27.198.190:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0421 20:06:08.211751   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:08.211751   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:08.211751   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:08.234858   12908 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0421 20:06:08.235408   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:08.235408   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:08.235408   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:08 GMT
	I0421 20:06:08.235502   12908 round_trippers.go:580]     Audit-Id: 282b0236-c6cf-4cd5-95ee-3cdeffa460db
	I0421 20:06:08.235502   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:08.235502   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:08.235502   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:08.235898   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:08.236140   12908 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0421 20:06:08.236140   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:08.236451   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:08.236451   12908 round_trippers.go:580]     Content-Length: 291
	I0421 20:06:08.236451   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:08 GMT
	I0421 20:06:08.236451   12908 round_trippers.go:580]     Audit-Id: bc0047cb-c436-4276-b787-77740340cc7c
	I0421 20:06:08.236451   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:08.236451   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:08.236451   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:08.236451   12908 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2a7680f-587b-4bb1-b1ed-b19b270e65d7","resourceVersion":"388","creationTimestamp":"2024-04-21T20:05:53Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0421 20:06:08.237408   12908 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2a7680f-587b-4bb1-b1ed-b19b270e65d7","resourceVersion":"388","creationTimestamp":"2024-04-21T20:05:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0421 20:06:08.237546   12908 round_trippers.go:463] PUT https://172.27.198.190:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0421 20:06:08.237574   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:08.237574   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:08.237647   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:08.237647   12908 round_trippers.go:473]     Content-Type: application/json
	I0421 20:06:08.257920   12908 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0421 20:06:08.257920   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:08.257920   12908 round_trippers.go:580]     Audit-Id: 85116a03-a1b0-46c9-9675-3eea00fb48e1
	I0421 20:06:08.257920   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:08.257920   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:08.257920   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:08.258042   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:08.258042   12908 round_trippers.go:580]     Content-Length: 291
	I0421 20:06:08.258042   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:08 GMT
	I0421 20:06:08.258042   12908 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2a7680f-587b-4bb1-b1ed-b19b270e65d7","resourceVersion":"391","creationTimestamp":"2024-04-21T20:05:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0421 20:06:08.721668   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:08.722045   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:08.722045   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:08.722045   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:08.721668   12908 round_trippers.go:463] GET https://172.27.198.190:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0421 20:06:08.722153   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:08.722153   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:08.722153   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:08.730486   12908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:06:08.730889   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:08.730889   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:08.730889   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:08.730889   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:08.730889   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:08.730889   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:08 GMT
	I0421 20:06:08.730988   12908 round_trippers.go:580]     Audit-Id: 869ecb93-022c-4fd0-89b1-874c632fd17c
	I0421 20:06:08.733477   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:08.734477   12908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 20:06:08.734765   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:08.734765   12908 round_trippers.go:580]     Audit-Id: 10815b1c-fdfd-4186-85f0-0239e9768aff
	I0421 20:06:08.734765   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:08.734765   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:08.734765   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:08.734847   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:08.734847   12908 round_trippers.go:580]     Content-Length: 291
	I0421 20:06:08.734847   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:08 GMT
	I0421 20:06:08.735565   12908 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c2a7680f-587b-4bb1-b1ed-b19b270e65d7","resourceVersion":"402","creationTimestamp":"2024-04-21T20:05:53Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0421 20:06:08.735638   12908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-152500" context rescaled to 1 replicas
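
The Scale GET/PUT pair above trims the stock two-replica coredns Deployment down to one replica, which is all a single-node profile needs; the second GET confirms status.replicas has converged to 1. A client-go sketch of the same rescale through the scale subresource (illustrative; names and the kubeconfig path are taken from this run):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// GET .../deployments/coredns/scale, as in the first request above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		// PUT the modified Scale back, matching the second request above.
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("coredns rescaled to 1 replica")
}
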
	I0421 20:06:09.212139   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:09.212302   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:09.212302   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:09.212302   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:09.216057   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:09.216057   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:09.216057   12908 round_trippers.go:580]     Audit-Id: 7b259c5c-ceb7-4d90-887c-0c7d647e90da
	I0421 20:06:09.216057   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:09.216057   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:09.216057   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:09.216201   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:09.216201   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:09 GMT
	I0421 20:06:09.216358   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:09.412789   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:06:09.413691   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:09.414556   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:06:09.414743   12908 main.go:141] libmachine: [stderr =====>] : 
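
The libmachine lines above are the Hyper-V driver shelling out to PowerShell to read the VM state (and, a little later, its first IP address). A small Go sketch of that pattern, reusing the exact command strings recorded in the log; it only runs on a Windows host with the Hyper-V module available, and error handling is trimmed:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// runPS invokes PowerShell the same way the Hyper-V driver does in the log above.
func runPS(expr string) (string, error) {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", expr).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := runPS(`( Hyper-V\Get-VM multinode-152500 ).state`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("state:", state) // expect "Running"

	ip, err := runPS(`(( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ip:", ip)
}
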
	I0421 20:06:09.414743   12908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:06:09.420642   12908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:06:09.414743   12908 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.198.190:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:06:09.421421   12908 addons.go:234] Setting addon default-storageclass=true in "multinode-152500"
	I0421 20:06:09.423419   12908 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:06:09.423419   12908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0421 20:06:09.423419   12908 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:06:09.423419   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:06:09.424616   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:06:09.717866   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:09.717866   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:09.717866   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:09.717866   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:09.728203   12908 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 20:06:09.728326   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:09.728326   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:09 GMT
	I0421 20:06:09.728326   12908 round_trippers.go:580]     Audit-Id: c9e693d5-f278-472f-a436-8047f50a9980
	I0421 20:06:09.728392   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:09.728392   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:09.728392   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:09.728392   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:09.728392   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:10.211020   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:10.211086   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:10.211086   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:10.211086   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:10.214477   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:10.214594   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:10.214594   12908 round_trippers.go:580]     Audit-Id: c0530a7f-5c58-4740-a906-7c8731e61a37
	I0421 20:06:10.214594   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:10.214594   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:10.214671   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:10.214671   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:10.214671   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:10 GMT
	I0421 20:06:10.215087   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:10.215833   12908 node_ready.go:53] node "multinode-152500" has status "Ready":"False"
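
The repeated GETs of /api/v1/nodes/multinode-152500 are node_ready.go polling until the node reports a Ready=True condition; kindnet has to come up first, so "Ready":"False" at this point is normal for a few seconds. A client-go sketch of the same readiness check (the kubeconfig path is an assumption; the 6m timeout mirrors the wait announced at 20:06:08):

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-152500", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			log.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
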
	I0421 20:06:10.717974   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:10.718173   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:10.718238   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:10.718238   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:10.723053   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:10.723395   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:10.723395   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:10.723395   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:10 GMT
	I0421 20:06:10.723506   12908 round_trippers.go:580]     Audit-Id: 2c5ec440-22fb-48c1-aeb7-a9d0e0e5a473
	I0421 20:06:10.723506   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:10.723541   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:10.723541   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:10.723770   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:11.224712   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:11.224794   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:11.224973   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:11.224973   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:11.229292   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:11.229292   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:11.229292   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:11.229292   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:11.229394   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:11.229394   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:11 GMT
	I0421 20:06:11.229394   12908 round_trippers.go:580]     Audit-Id: 97d11b4b-2321-43ce-9be8-45fc0bc91ed0
	I0421 20:06:11.229394   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:11.229754   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:11.716550   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:11.716550   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:11.716550   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:11.716550   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:11.721408   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:11.721846   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:11.721846   12908 round_trippers.go:580]     Audit-Id: 3505ab41-8502-438d-8c73-4957f66c8c5f
	I0421 20:06:11.721846   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:11.721846   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:11.721846   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:11.721846   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:11.721985   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:11 GMT
	I0421 20:06:11.722394   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:11.764394   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:06:11.765396   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:11.765540   12908 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0421 20:06:11.765540   12908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0421 20:06:11.765540   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:06:11.784364   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:06:11.784759   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:11.784871   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:06:12.221809   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:12.221886   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:12.221886   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:12.221886   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:12.227286   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:06:12.227853   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:12.227853   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:12 GMT
	I0421 20:06:12.227853   12908 round_trippers.go:580]     Audit-Id: 2c67d438-ad10-4c30-a66c-6b4f85a045ec
	I0421 20:06:12.227853   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:12.227853   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:12.227853   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:12.227853   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:12.228355   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:12.229293   12908 node_ready.go:53] node "multinode-152500" has status "Ready":"False"
	I0421 20:06:12.712655   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:12.712766   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:12.712766   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:12.712766   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:12.719622   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:06:12.719622   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:12.719622   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:12.719622   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:12 GMT
	I0421 20:06:12.719622   12908 round_trippers.go:580]     Audit-Id: adeef4b1-0fbd-441a-aa3d-1022c64572c5
	I0421 20:06:12.719622   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:12.719622   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:12.719622   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:12.719622   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:13.220851   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:13.220851   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:13.221151   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:13.221151   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:13.224932   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:13.225067   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:13.225067   12908 round_trippers.go:580]     Audit-Id: f309954c-dd9d-4992-8b91-5e639dc47f90
	I0421 20:06:13.225067   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:13.225067   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:13.225067   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:13.225067   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:13.225067   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:13 GMT
	I0421 20:06:13.225528   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:13.710479   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:13.710554   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:13.710554   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:13.710554   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:13.713937   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:13.713937   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:13.713937   12908 round_trippers.go:580]     Audit-Id: b8d45e36-f1f4-456b-967c-76372491bee1
	I0421 20:06:13.713937   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:13.713937   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:13.713937   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:13.713937   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:13.713937   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:13 GMT
	I0421 20:06:13.714944   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:14.063930   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:06:14.063930   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:14.064382   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:06:14.215130   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:14.215130   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:14.215130   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:14.215130   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:14.220385   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:06:14.220748   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:14.220876   12908 round_trippers.go:580]     Audit-Id: 51a4884d-1bf2-4773-baaa-7aa5c9aedcec
	I0421 20:06:14.220967   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:14.220967   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:14.220967   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:14.220967   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:14.220967   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:14 GMT
	I0421 20:06:14.221495   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:14.516296   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:06:14.516376   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:14.516437   12908 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
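Interleaved with the API polling, libmachine shells out to PowerShell to rediscover the VM's IP address and then opens the SSH session used for the addon applies. The following is only an illustrative stand-alone sketch of that IP lookup, not minikube's actual driver code; it assumes a Windows host with the Hyper-V PowerShell module and a VM named multinode-152500, exactly as printed in the log above.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same PowerShell expression the log shows libmachine executing to discover the VM's IP.
	ps := `(( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("VM IP:", ip) // e.g. 172.27.198.190, as seen in the log
}
```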
	I0421 20:06:14.666092   12908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0421 20:06:14.719058   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:14.719058   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:14.719058   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:14.719058   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:14.722057   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:14.722057   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:14.722057   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:14.722057   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:14.722057   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:14.722057   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:14.722057   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:14 GMT
	I0421 20:06:14.722057   12908 round_trippers.go:580]     Audit-Id: ba4d15dd-91df-46e0-8ec3-aa0792b40cfe
	I0421 20:06:14.723095   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:14.723358   12908 node_ready.go:53] node "multinode-152500" has status "Ready":"False"
	I0421 20:06:15.210263   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:15.210374   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:15.210374   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:15.210374   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:15.348595   12908 round_trippers.go:574] Response Status: 200 OK in 138 milliseconds
	I0421 20:06:15.348595   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:15.348595   12908 round_trippers.go:580]     Audit-Id: fb333823-a4b2-4333-8444-b7e86d05fbdb
	I0421 20:06:15.348595   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:15.348595   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:15.348595   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:15.348595   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:15.348595   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:15 GMT
	I0421 20:06:15.349582   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:15.657331   12908 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0421 20:06:15.657397   12908 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0421 20:06:15.657397   12908 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0421 20:06:15.657397   12908 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0421 20:06:15.657397   12908 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0421 20:06:15.657397   12908 command_runner.go:130] > pod/storage-provisioner created
	I0421 20:06:15.717536   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:15.717600   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:15.717600   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:15.717600   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:15.721186   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:15.721186   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:15.721186   12908 round_trippers.go:580]     Audit-Id: fc2f36b8-c528-4d84-a944-88c4ff703185
	I0421 20:06:15.721186   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:15.721186   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:15.721186   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:15.721186   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:15.721186   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:15 GMT
	I0421 20:06:15.721186   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:16.224489   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:16.224565   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:16.224565   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:16.224565   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:16.229353   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:16.229353   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:16.229353   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:16.229353   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:16.229353   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:16.229353   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:16 GMT
	I0421 20:06:16.229353   12908 round_trippers.go:580]     Audit-Id: eb532356-75e0-42eb-83e8-c973e29a7abc
	I0421 20:06:16.229353   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:16.230017   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:16.712464   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:16.712691   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:16.712691   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:16.712691   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:16.716080   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:16.716080   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:16.716080   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:16.716080   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:16.716080   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:16.716080   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:16.716080   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:16 GMT
	I0421 20:06:16.716505   12908 round_trippers.go:580]     Audit-Id: 01f1c7db-562d-4fcb-88d7-2fced10908b7
	I0421 20:06:16.716648   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:16.762102   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:06:16.762102   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:16.762886   12908 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:06:16.904826   12908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0421 20:06:17.084656   12908 command_runner.go:130] > storageclass.storage.k8s.io/standard created
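The two ssh_runner calls above are how the addon manifests reach the cluster: minikube dials the node over SSH (IP and key path printed by libmachine a few lines earlier) and runs the bundled kubectl against the in-VM kubeconfig. Below is a minimal stand-alone sketch of that remote apply using golang.org/x/crypto/ssh, with the IP, key path, user and command taken from this log; it is not minikube's own ssh_runner implementation.

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and node IP are copied from the log above; adjust for your own machine profile.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "172.27.198.190:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	var out bytes.Buffer
	session.Stdout = &out
	// Same command the log shows being run inside the guest.
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml"
	if err := session.Run(cmd); err != nil {
		log.Fatal(err)
	}
	fmt.Print(out.String()) // e.g. "storageclass.storage.k8s.io/standard created"
}
```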
	I0421 20:06:17.084656   12908 round_trippers.go:463] GET https://172.27.198.190:8443/apis/storage.k8s.io/v1/storageclasses
	I0421 20:06:17.084656   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:17.084656   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:17.084656   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:17.088660   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:17.088724   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:17.088724   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:17.088724   12908 round_trippers.go:580]     Content-Length: 1273
	I0421 20:06:17.088724   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:17 GMT
	I0421 20:06:17.088724   12908 round_trippers.go:580]     Audit-Id: 8e97ee23-9bb9-448c-851c-96be92e35809
	I0421 20:06:17.088724   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:17.088840   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:17.088840   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:17.088883   12908 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"standard","uid":"97043457-0b7a-4900-a437-59b480772da9","resourceVersion":"424","creationTimestamp":"2024-04-21T20:06:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-21T20:06:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0421 20:06:17.089186   12908 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"97043457-0b7a-4900-a437-59b480772da9","resourceVersion":"424","creationTimestamp":"2024-04-21T20:06:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-21T20:06:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0421 20:06:17.089186   12908 round_trippers.go:463] PUT https://172.27.198.190:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0421 20:06:17.089186   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:17.089186   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:17.089186   12908 round_trippers.go:473]     Content-Type: application/json
	I0421 20:06:17.089186   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:17.093786   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:17.094837   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:17.094837   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:17.094837   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:17.094837   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:17.094837   12908 round_trippers.go:580]     Content-Length: 1220
	I0421 20:06:17.094837   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:17 GMT
	I0421 20:06:17.094837   12908 round_trippers.go:580]     Audit-Id: e487bad4-7524-40c7-8fdf-4b8fb0c811cb
	I0421 20:06:17.094837   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:17.094837   12908 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"97043457-0b7a-4900-a437-59b480772da9","resourceVersion":"424","creationTimestamp":"2024-04-21T20:06:17Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-04-21T20:06:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0421 20:06:17.098760   12908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0421 20:06:17.101756   12908 addons.go:505] duration metric: took 9.9792199s for enable addons: enabled=[storage-provisioner default-storageclass]
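The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses just above is the default-storageclass addon re-applying the standard class with the storageclass.kubernetes.io/is-default-class: "true" annotation before reporting "Enabled addons". A small client-go sketch that verifies the same state from outside the test, assuming a kubeconfig at the standard home location, might look like this (illustrative only):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sc := range scs.Items {
		// The addon marks "standard" (provisioner k8s.io/minikube-hostpath) as the default class.
		isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
		fmt.Printf("%s default=%v provisioner=%s\n", sc.Name, isDefault, sc.Provisioner)
	}
}
```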
	I0421 20:06:17.216511   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:17.216511   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:17.216511   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:17.216511   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:17.221271   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:17.221309   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:17.221309   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:17.221309   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:17.221309   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:17.221309   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:17 GMT
	I0421 20:06:17.221309   12908 round_trippers.go:580]     Audit-Id: b5110420-3d61-473e-8302-f2892a3809e8
	I0421 20:06:17.221309   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:17.221567   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:17.222069   12908 node_ready.go:53] node "multinode-152500" has status "Ready":"False"
	I0421 20:06:17.716794   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:17.717005   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:17.717005   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:17.717005   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:17.721084   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:17.721084   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:17.721084   12908 round_trippers.go:580]     Audit-Id: d17d4a6e-f5d5-4b2e-8956-354deb36fb6b
	I0421 20:06:17.721084   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:17.721084   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:17.721084   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:17.721084   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:17.721084   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:17 GMT
	I0421 20:06:17.721952   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:18.216659   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:18.216776   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:18.216776   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:18.216776   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:18.221430   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:18.221430   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:18.221430   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:18.221842   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:18.221842   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:18.221842   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:18 GMT
	I0421 20:06:18.221842   12908 round_trippers.go:580]     Audit-Id: f0db32f9-4817-4589-b1d6-70002a5223dc
	I0421 20:06:18.221842   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:18.222157   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:18.715004   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:18.715004   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:18.715004   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:18.715004   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:18.720697   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:06:18.721020   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:18.721020   12908 round_trippers.go:580]     Audit-Id: 7bbb9159-c173-4745-a34c-deeb506606a2
	I0421 20:06:18.721152   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:18.721152   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:18.721172   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:18.721172   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:18.721172   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:18 GMT
	I0421 20:06:18.721587   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:19.214826   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:19.214826   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:19.214826   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:19.214826   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:19.218288   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:19.218288   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:19.219150   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:19.219150   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:19.219150   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:19 GMT
	I0421 20:06:19.219150   12908 round_trippers.go:580]     Audit-Id: 17af7c77-be91-414c-8f61-336dfac3c1d9
	I0421 20:06:19.219150   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:19.219150   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:19.219753   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:19.712905   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:19.713180   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:19.713180   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:19.713180   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:19.716702   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:19.716702   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:19.716702   12908 round_trippers.go:580]     Audit-Id: 2902511e-b713-4640-a82d-76b967e20b36
	I0421 20:06:19.717691   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:19.717691   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:19.717716   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:19.717716   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:19.717716   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:19 GMT
	I0421 20:06:19.718369   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:19.718941   12908 node_ready.go:53] node "multinode-152500" has status "Ready":"False"
	I0421 20:06:20.212750   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:20.212750   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:20.212981   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:20.212981   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:20.215347   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:20.215347   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:20.215347   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:20.215347   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:20.215347   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:20 GMT
	I0421 20:06:20.215347   12908 round_trippers.go:580]     Audit-Id: f0487c0d-b5a7-490e-ba06-e50ffd9c2291
	I0421 20:06:20.215347   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:20.215347   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:20.216462   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:20.713577   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:20.713577   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:20.713577   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:20.713577   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:20.718549   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:20.718549   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:20.718549   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:20.718549   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:20.719491   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:20 GMT
	I0421 20:06:20.719491   12908 round_trippers.go:580]     Audit-Id: 4bfaacea-72e3-47e0-9e36-99c4abd4af89
	I0421 20:06:20.719491   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:20.719491   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:20.719641   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"364","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0421 20:06:21.220373   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:21.220510   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:21.220650   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:21.220650   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:21.225265   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:21.225265   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:21.225265   12908 round_trippers.go:580]     Audit-Id: e95fcab5-45f4-4bc5-8aa9-b1aa6138ae46
	I0421 20:06:21.225265   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:21.225265   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:21.225265   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:21.225265   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:21.225676   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:21 GMT
	I0421 20:06:21.226833   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:21.226833   12908 node_ready.go:49] node "multinode-152500" has status "Ready":"True"
	I0421 20:06:21.226833   12908 node_ready.go:38] duration metric: took 13.0175765s for node "multinode-152500" to be "Ready" ...
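The long run of roughly half-second GETs against /api/v1/nodes/multinode-152500 above is the node_ready wait: the Node object is re-read until its Ready condition flips to True, which here takes about 13 s. The following is a compact client-go version of the same poll for illustration, assuming the kubeconfig at clientcmd's default home path rather than the profile-specific one minikube uses.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-152500", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the polling interval visible in the log
	}
	log.Fatal("timed out waiting for node to become Ready")
}
```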
	I0421 20:06:21.226833   12908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:06:21.226833   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods
	I0421 20:06:21.226833   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:21.226833   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:21.226833   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:21.233818   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:06:21.233818   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:21.233818   12908 round_trippers.go:580]     Audit-Id: a078f143-18f4-4b33-8140-84b4c2c5c902
	I0421 20:06:21.233818   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:21.234034   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:21.234034   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:21.234034   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:21.234034   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:21 GMT
	I0421 20:06:21.236241   12908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"434","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0421 20:06:21.241917   12908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
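Once the node is Ready the log moves to the pod_ready phase: list the kube-system pods, then wait on each system-critical pod (starting with coredns-7db6d8ff4d-v7pf8) until its Ready condition is True. As a rough illustration of that per-pod check, the sketch below lists only the DNS pods via the k8s-app=kube-dns label named in the wait message above and reports their Ready condition; it uses the same kubeconfig assumption as the node-ready sketch and does not reproduce minikube's full list of component labels.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// One of the label selectors from the pod_ready wait list above (the coredns pods).
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v phase=%s\n", p.Name, podReady(&p), p.Status.Phase)
	}
}
```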
	I0421 20:06:21.241987   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:06:21.242118   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:21.242118   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:21.242118   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:21.249898   12908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:06:21.249898   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:21.249898   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:21.249898   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:21.250305   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:21.250305   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:21.250305   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:21 GMT
	I0421 20:06:21.250305   12908 round_trippers.go:580]     Audit-Id: 83bc2da0-520d-47e6-8873-166c3b471de3
	I0421 20:06:21.250520   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"434","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0421 20:06:21.250720   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:21.251256   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:21.251322   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:21.251322   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:21.258467   12908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:06:21.258467   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:21.258467   12908 round_trippers.go:580]     Audit-Id: e21df065-4913-47de-ba2a-e41b07e84f4c
	I0421 20:06:21.258467   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:21.258467   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:21.258467   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:21.258467   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:21.258467   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:21 GMT
	I0421 20:06:21.259507   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:21.744851   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:06:21.745092   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:21.745125   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:21.745125   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:21.748146   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:21.748146   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:21.748146   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:21.748146   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:21 GMT
	I0421 20:06:21.748146   12908 round_trippers.go:580]     Audit-Id: 7fa9bf85-8967-4bba-bd44-2b7f990a8800
	I0421 20:06:21.748146   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:21.748146   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:21.749007   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:21.749233   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"434","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0421 20:06:21.750045   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:21.750204   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:21.750204   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:21.750204   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:21.753030   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:21.753030   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:21.753030   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:21.753030   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:21.753030   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:21 GMT
	I0421 20:06:21.753030   12908 round_trippers.go:580]     Audit-Id: bc42f5b0-bb97-439e-999d-62e1fa032811
	I0421 20:06:21.753030   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:21.753030   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:21.753030   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:22.253417   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:06:22.253417   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:22.253417   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:22.253417   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:22.258000   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:22.258000   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:22.258000   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:22 GMT
	I0421 20:06:22.258000   12908 round_trippers.go:580]     Audit-Id: bd1686bd-b2ed-4e4a-a41a-766e2a087406
	I0421 20:06:22.258000   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:22.258000   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:22.258095   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:22.258095   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:22.258249   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"434","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0421 20:06:22.259058   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:22.259058   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:22.259058   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:22.259116   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:22.261048   12908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 20:06:22.263693   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:22.263752   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:22.263752   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:22 GMT
	I0421 20:06:22.263752   12908 round_trippers.go:580]     Audit-Id: 04d0493c-ea58-4f7d-a85d-1539b98f2651
	I0421 20:06:22.263752   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:22.263784   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:22.263784   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:22.264040   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:22.743378   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:06:22.743378   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:22.743378   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:22.743378   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:22.746994   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:22.746994   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:22.746994   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:22.746994   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:22 GMT
	I0421 20:06:22.747866   12908 round_trippers.go:580]     Audit-Id: 411789e5-315e-4da6-a34e-7450259f54c7
	I0421 20:06:22.747866   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:22.747866   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:22.747866   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:22.747928   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"434","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0421 20:06:22.748670   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:22.748757   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:22.748757   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:22.748757   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:22.754744   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:06:22.754744   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:22.754744   12908 round_trippers.go:580]     Audit-Id: 59ea43eb-1b8b-476a-991f-6fd38443b410
	I0421 20:06:22.754744   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:22.754744   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:22.754744   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:22.754744   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:22.754744   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:22 GMT
	I0421 20:06:22.754744   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:23.244013   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:06:23.244155   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.244155   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.244155   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.255127   12908 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 20:06:23.255127   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.255127   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.255514   12908 round_trippers.go:580]     Audit-Id: 69d4840d-c2c4-4a0f-aa3d-ba873c602b9c
	I0421 20:06:23.255514   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.255514   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.255514   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.255514   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.257041   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"445","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6809 chars]
	I0421 20:06:23.258020   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:23.258020   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.258078   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.258078   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.270885   12908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 20:06:23.270885   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.270960   12908 round_trippers.go:580]     Audit-Id: 1c4c696c-7584-454b-ae9a-c694621ede55
	I0421 20:06:23.270960   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.270960   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.270960   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.270960   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.270960   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.271228   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:23.271806   12908 pod_ready.go:102] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:06:23.747412   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:06:23.747475   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.747475   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.747475   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.752062   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:23.752062   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.752062   12908 round_trippers.go:580]     Audit-Id: eede4540-321a-465b-bc75-8de6025a2c1e
	I0421 20:06:23.752623   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.752623   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.752623   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.752623   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.752623   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.752985   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0421 20:06:23.753513   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:23.753647   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.753666   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.753666   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.760226   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:06:23.760226   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.760226   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.760226   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.760226   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.760226   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.760226   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.760226   12908 round_trippers.go:580]     Audit-Id: f917ebee-d8ac-4aa5-b52f-c7736720129a
	I0421 20:06:23.760226   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:23.760959   12908 pod_ready.go:92] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:06:23.760959   12908 pod_ready.go:81] duration metric: took 2.5190018s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.760959   12908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.760959   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:06:23.761505   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.761505   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.761505   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.763363   12908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 20:06:23.764355   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.764355   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.764355   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.764355   12908 round_trippers.go:580]     Audit-Id: b6607892-3ddd-449c-a915-addfc656f851
	I0421 20:06:23.764355   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.764355   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.764355   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.764355   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"e5f399f5-b04e-4ac1-8646-d103d2d8f74a","resourceVersion":"322","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.198.190:2379","kubernetes.io/config.hash":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.mirror":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.seen":"2024-04-21T20:05:53.333716613Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0421 20:06:23.765480   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:23.765540   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.765540   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.765540   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.768063   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:23.768063   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.768063   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.768063   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.768063   12908 round_trippers.go:580]     Audit-Id: 21bab1ed-7d51-4f37-b76f-9f676b6f948c
	I0421 20:06:23.768063   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.768063   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.768063   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.768063   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:23.769038   12908 pod_ready.go:92] pod "etcd-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:06:23.769038   12908 pod_ready.go:81] duration metric: took 8.0792ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.769038   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.769038   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:06:23.769038   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.769038   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.769038   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.772057   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:23.772489   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.772489   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.772489   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.772489   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.772489   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.772489   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.772489   12908 round_trippers.go:580]     Audit-Id: a5a8dd73-2d21-489b-a189-7f9c9d5b7384
	I0421 20:06:23.772489   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"52744df0-77af-4caf-b69d-af2789c25eab","resourceVersion":"324","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.198.190:8443","kubernetes.io/config.hash":"795735df3eb25834ddaf2db596e59a82","kubernetes.io/config.mirror":"795735df3eb25834ddaf2db596e59a82","kubernetes.io/config.seen":"2024-04-21T20:05:53.333722413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0421 20:06:23.773116   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:23.773116   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.773116   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.773116   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.775705   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:23.775705   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.775705   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.775705   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.775705   12908 round_trippers.go:580]     Audit-Id: c32c8527-1aac-474a-ab10-b69ebc1ba931
	I0421 20:06:23.775705   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.775705   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.775705   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.776986   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:23.777960   12908 pod_ready.go:92] pod "kube-apiserver-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:06:23.777960   12908 pod_ready.go:81] duration metric: took 8.9217ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.778021   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.778089   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:06:23.778153   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.778153   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.778153   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.780713   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:23.780713   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.780713   12908 round_trippers.go:580]     Audit-Id: 4d2b3b76-297e-4bad-a1ed-23fd18b59ed1
	I0421 20:06:23.780713   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.780713   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.780713   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.780713   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.780713   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.781492   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"330","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0421 20:06:23.782087   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:23.782087   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.782087   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.782087   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.784718   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:23.784718   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.784718   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.784718   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.784718   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.784718   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.784718   12908 round_trippers.go:580]     Audit-Id: 5caf6200-fbff-4541-a627-234b162ab708
	I0421 20:06:23.785064   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.785064   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:23.785601   12908 pod_ready.go:92] pod "kube-controller-manager-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:06:23.785669   12908 pod_ready.go:81] duration metric: took 7.6482ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.785699   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.785699   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:06:23.785699   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.785699   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.785699   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.788689   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:06:23.788689   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.788689   12908 round_trippers.go:580]     Audit-Id: 6b50e48f-2078-430a-9317-b98dfb943cef
	I0421 20:06:23.788689   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.788689   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.788689   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.788689   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.788689   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.788689   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"405","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0421 20:06:23.789691   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:23.789691   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.789691   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.789691   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.792725   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:23.792725   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.792725   12908 round_trippers.go:580]     Audit-Id: 260fe8af-b6de-4cc9-bd27-a446be24a465
	I0421 20:06:23.792725   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.792725   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.792725   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.792725   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.792725   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.792725   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:23.792725   12908 pod_ready.go:92] pod "kube-proxy-kl8t2" in "kube-system" namespace has status "Ready":"True"
	I0421 20:06:23.792725   12908 pod_ready.go:81] duration metric: took 7.0258ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.792725   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:23.950541   12908 request.go:629] Waited for 157.6183ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:06:23.950672   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:06:23.950672   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:23.950672   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:23.950672   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:23.954079   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:23.954079   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:23.954079   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:23 GMT
	I0421 20:06:23.954079   12908 round_trippers.go:580]     Audit-Id: 215f7da8-00b8-4574-8fb8-b0147688b56d
	I0421 20:06:23.954079   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:23.954079   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:23.954521   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:23.954521   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:23.954704   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"328","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0421 20:06:24.153490   12908 request.go:629] Waited for 198.2073ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:24.153490   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:06:24.153960   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:24.153960   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:24.153960   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:24.160587   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:06:24.160587   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:24.160587   12908 round_trippers.go:580]     Audit-Id: e9a0dcf5-fcaa-44cc-a0a0-a9d92504a90e
	I0421 20:06:24.160587   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:24.160587   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:24.160587   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:24.160587   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:24.160587   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:24 GMT
	I0421 20:06:24.162215   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"428","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0421 20:06:24.162304   12908 pod_ready.go:92] pod "kube-scheduler-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:06:24.162304   12908 pod_ready.go:81] duration metric: took 369.5767ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:06:24.162304   12908 pod_ready.go:38] duration metric: took 2.9354501s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:06:24.162304   12908 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:06:24.176254   12908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:06:24.206679   12908 command_runner.go:130] > 2040
	I0421 20:06:24.206872   12908 api_server.go:72] duration metric: took 17.0851781s to wait for apiserver process to appear ...
	I0421 20:06:24.206916   12908 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:06:24.206998   12908 api_server.go:253] Checking apiserver healthz at https://172.27.198.190:8443/healthz ...
	I0421 20:06:24.216630   12908 api_server.go:279] https://172.27.198.190:8443/healthz returned 200:
	ok
	I0421 20:06:24.216824   12908 round_trippers.go:463] GET https://172.27.198.190:8443/version
	I0421 20:06:24.216877   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:24.216877   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:24.216877   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:24.218232   12908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 20:06:24.218232   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:24.218232   12908 round_trippers.go:580]     Content-Length: 263
	I0421 20:06:24.218232   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:24 GMT
	I0421 20:06:24.218881   12908 round_trippers.go:580]     Audit-Id: bbaac51e-1948-4a16-b54a-00b56acbc60f
	I0421 20:06:24.218881   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:24.218881   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:24.218881   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:24.218881   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:24.218949   12908 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0421 20:06:24.219099   12908 api_server.go:141] control plane version: v1.30.0
	I0421 20:06:24.219099   12908 api_server.go:131] duration metric: took 12.1451ms to wait for apiserver health ...
	I0421 20:06:24.219099   12908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:06:24.356546   12908 request.go:629] Waited for 137.446ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods
	I0421 20:06:24.357206   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods
	I0421 20:06:24.357206   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:24.357206   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:24.357206   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:24.362821   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:06:24.362821   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:24.362821   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:24.362821   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:24.362821   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:24.362821   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:24.362821   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:24 GMT
	I0421 20:06:24.363270   12908 round_trippers.go:580]     Audit-Id: 2b75c59b-59b8-4b7b-b244-72960088e920
	I0421 20:06:24.365584   12908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0421 20:06:24.368703   12908 system_pods.go:59] 8 kube-system pods found
	I0421 20:06:24.368798   12908 system_pods.go:61] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:06:24.368798   12908 system_pods.go:61] "etcd-multinode-152500" [e5f399f5-b04e-4ac1-8646-d103d2d8f74a] Running
	I0421 20:06:24.368798   12908 system_pods.go:61] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:06:24.368798   12908 system_pods.go:61] "kube-apiserver-multinode-152500" [52744df0-77af-4caf-b69d-af2789c25eab] Running
	I0421 20:06:24.368798   12908 system_pods.go:61] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:06:24.368798   12908 system_pods.go:61] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:06:24.368798   12908 system_pods.go:61] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:06:24.368798   12908 system_pods.go:61] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:06:24.368798   12908 system_pods.go:74] duration metric: took 149.6983ms to wait for pod list to return data ...
	I0421 20:06:24.368798   12908 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:06:24.561285   12908 request.go:629] Waited for 192.06ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/namespaces/default/serviceaccounts
	I0421 20:06:24.561285   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/default/serviceaccounts
	I0421 20:06:24.561285   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:24.561285   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:24.561285   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:24.565908   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:06:24.565908   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:24.565908   12908 round_trippers.go:580]     Content-Length: 261
	I0421 20:06:24.566164   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:24 GMT
	I0421 20:06:24.566164   12908 round_trippers.go:580]     Audit-Id: d4577227-3cd4-4eae-81d5-f45259ebe09e
	I0421 20:06:24.566164   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:24.566164   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:24.566164   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:24.566164   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:24.566238   12908 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a4620806-bbb0-42e7-af50-a593b05fe653","resourceVersion":"352","creationTimestamp":"2024-04-21T20:06:07Z"}}]}
	I0421 20:06:24.566819   12908 default_sa.go:45] found service account: "default"
	I0421 20:06:24.566881   12908 default_sa.go:55] duration metric: took 198.019ms for default service account to be created ...
	I0421 20:06:24.566881   12908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:06:24.748403   12908 request.go:629] Waited for 181.2217ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods
	I0421 20:06:24.748666   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods
	I0421 20:06:24.748666   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:24.748666   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:24.748666   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:24.754446   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:06:24.754446   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:24.754629   12908 round_trippers.go:580]     Audit-Id: 965d648d-7c0b-41f0-b38c-cbfffcdad9c9
	I0421 20:06:24.754629   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:24.754629   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:24.754691   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:24.754691   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:24.754691   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:24 GMT
	I0421 20:06:24.756159   12908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0421 20:06:24.760040   12908 system_pods.go:86] 8 kube-system pods found
	I0421 20:06:24.760040   12908 system_pods.go:89] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:06:24.760040   12908 system_pods.go:89] "etcd-multinode-152500" [e5f399f5-b04e-4ac1-8646-d103d2d8f74a] Running
	I0421 20:06:24.760040   12908 system_pods.go:89] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:06:24.760040   12908 system_pods.go:89] "kube-apiserver-multinode-152500" [52744df0-77af-4caf-b69d-af2789c25eab] Running
	I0421 20:06:24.760040   12908 system_pods.go:89] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:06:24.760040   12908 system_pods.go:89] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:06:24.760040   12908 system_pods.go:89] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:06:24.760040   12908 system_pods.go:89] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:06:24.760040   12908 system_pods.go:126] duration metric: took 193.1577ms to wait for k8s-apps to be running ...
	I0421 20:06:24.760040   12908 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:06:24.776248   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:06:24.802361   12908 system_svc.go:56] duration metric: took 42.3212ms WaitForService to wait for kubelet
	I0421 20:06:24.802361   12908 kubeadm.go:576] duration metric: took 17.6806633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:06:24.802361   12908 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:06:24.950677   12908 request.go:629] Waited for 147.712ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/nodes
	I0421 20:06:24.950677   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes
	I0421 20:06:24.950677   12908 round_trippers.go:469] Request Headers:
	I0421 20:06:24.951062   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:06:24.951062   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:06:24.954542   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:06:24.954542   12908 round_trippers.go:577] Response Headers:
	I0421 20:06:24.954542   12908 round_trippers.go:580]     Audit-Id: 5ccee5d7-47c9-4ff3-b9f9-6dc115294701
	I0421 20:06:24.954542   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:06:24.954542   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:06:24.954542   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:06:24.954542   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:06:24.954542   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:06:24 GMT
	I0421 20:06:24.955380   12908 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5013 chars]
	I0421 20:06:24.956213   12908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:06:24.956275   12908 node_conditions.go:123] node cpu capacity is 2
	I0421 20:06:24.956345   12908 node_conditions.go:105] duration metric: took 153.9823ms to run NodePressure ...
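
The verification block above (kube-system pods, default service account, node capacity) is all driven through the Kubernetes API of the primary node. A minimal client-go sketch of the same three checks, assuming a reachable kubeconfig at the hypothetical kubeconfigPath below; this is illustrative only, not minikube's own verification code:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location for the profile; adjust as needed.
	kubeconfigPath := `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// 1. All kube-system pods should be Running (mirrors the system_pods.go wait).
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}

	// 2. The "default" service account must exist (mirrors the default_sa.go wait).
	if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
		log.Fatal(err)
	}

	// 3. Read node capacity, as node_conditions.go reports CPU and ephemeral storage.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
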
	I0421 20:06:24.956345   12908 start.go:240] waiting for startup goroutines ...
	I0421 20:06:24.956345   12908 start.go:245] waiting for cluster config update ...
	I0421 20:06:24.956345   12908 start.go:254] writing updated cluster config ...
	I0421 20:06:24.961905   12908 out.go:177] 
	I0421 20:06:24.965468   12908 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:06:24.972509   12908 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:06:24.972509   12908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:06:24.978046   12908 out.go:177] * Starting "multinode-152500-m02" worker node in "multinode-152500" cluster
	I0421 20:06:24.981926   12908 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:06:24.982012   12908 cache.go:56] Caching tarball of preloaded images
	I0421 20:06:24.982425   12908 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:06:24.982544   12908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:06:24.982793   12908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:06:24.987859   12908 start.go:360] acquireMachinesLock for multinode-152500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:06:24.987859   12908 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-152500-m02"
	I0421 20:06:24.988390   12908 start.go:93] Provisioning new machine with config: &{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clus
terName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:06:24.988617   12908 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0421 20:06:24.991488   12908 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0421 20:06:24.992063   12908 start.go:159] libmachine.API.Create for "multinode-152500" (driver="hyperv")
	I0421 20:06:24.992129   12908 client.go:168] LocalClient.Create starting
	I0421 20:06:24.992129   12908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0421 20:06:24.992703   12908 main.go:141] libmachine: Decoding PEM data...
	I0421 20:06:24.992703   12908 main.go:141] libmachine: Parsing certificate...
	I0421 20:06:24.992703   12908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0421 20:06:24.992703   12908 main.go:141] libmachine: Decoding PEM data...
	I0421 20:06:24.992703   12908 main.go:141] libmachine: Parsing certificate...
	I0421 20:06:24.992703   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0421 20:06:27.030826   12908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0421 20:06:27.030826   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:27.031615   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0421 20:06:28.848039   12908 main.go:141] libmachine: [stdout =====>] : False
	
	I0421 20:06:28.848261   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:28.848323   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 20:06:30.398556   12908 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 20:06:30.398556   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:30.399105   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 20:06:34.168455   12908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 20:06:34.168584   12908 main.go:141] libmachine: [stderr =====>] : 
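
Each "[executing ==>]" line above is one PowerShell invocation shelled out from the driver. A rough Go sketch of that pattern for the switch query shown in the log, assuming powershell.exe is on PATH; the hyperv driver's real implementation lives in libmachine, so treat this purely as an illustration:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// vmSwitch captures the fields selected by the Get-VMSwitch query in the log.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	// Same query as the log: external switches, or the well-known "Default Switch" GUID.
	ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | ` +
		`Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | ` +
		`Sort-Object -Property SwitchType)`

	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
	if err != nil {
		log.Fatal(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		log.Fatal(err)
	}
	for _, s := range switches {
		fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
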
	I0421 20:06:34.170841   12908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0421 20:06:34.707751   12908 main.go:141] libmachine: Creating SSH key...
	I0421 20:06:35.151859   12908 main.go:141] libmachine: Creating VM...
	I0421 20:06:35.151859   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0421 20:06:38.175820   12908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0421 20:06:38.176567   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:38.176649   12908 main.go:141] libmachine: Using switch "Default Switch"
	I0421 20:06:38.176764   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0421 20:06:40.043501   12908 main.go:141] libmachine: [stdout =====>] : True
	
	I0421 20:06:40.043501   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:40.043595   12908 main.go:141] libmachine: Creating VHD
	I0421 20:06:40.043595   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0421 20:06:43.837020   12908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CB76E25D-A3C7-4AF0-AF31-0FD3F677379E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0421 20:06:43.837660   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:43.837660   12908 main.go:141] libmachine: Writing magic tar header
	I0421 20:06:43.837660   12908 main.go:141] libmachine: Writing SSH key tar header
	I0421 20:06:43.848178   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0421 20:06:47.048503   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:06:47.048503   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:47.048503   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\disk.vhd' -SizeBytes 20000MB
	I0421 20:06:49.681256   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:06:49.681256   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:49.681256   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-152500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0421 20:06:53.481577   12908 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-152500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0421 20:06:53.481577   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:53.482165   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-152500-m02 -DynamicMemoryEnabled $false
	I0421 20:06:55.803929   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:06:55.804540   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:55.804634   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-152500-m02 -Count 2
	I0421 20:06:58.012467   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:06:58.012467   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:06:58.012842   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-152500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\boot2docker.iso'
	I0421 20:07:00.708747   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:07:00.708747   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:00.708873   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-152500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\disk.vhd'
	I0421 20:07:03.431053   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:07:03.431053   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:03.431053   12908 main.go:141] libmachine: Starting VM...
	I0421 20:07:03.431053   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-152500-m02
	I0421 20:07:06.581890   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:07:06.582879   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:06.582879   12908 main.go:141] libmachine: Waiting for host to start...
	I0421 20:07:06.582879   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:08.932928   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:08.933167   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:08.933269   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:11.531624   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:07:11.531624   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:12.543301   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:14.786252   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:14.786252   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:14.786252   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:17.414490   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:07:17.414490   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:18.427212   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:20.658738   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:20.658738   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:20.658738   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:23.210510   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:07:23.210838   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:24.211631   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:26.458382   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:26.458382   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:26.458382   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:29.106849   12908 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:07:29.107873   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:30.119812   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:32.364370   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:32.365062   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:32.365062   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:35.073419   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:07:35.073419   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:35.073419   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:37.226916   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:37.227731   12908 main.go:141] libmachine: [stderr =====>] : 
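
The "Waiting for host to start..." block above is a simple poll: query the VM state, then the first IP address on its first network adapter, and sleep for a moment while the address is still empty. A hedged Go sketch of that loop; vmName and the runPS helper are placeholders, not the driver's real code:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// runPS runs a single PowerShell snippet and returns its trimmed stdout.
// Placeholder helper; the real driver wraps this with the "[executing ==>]" logging.
func runPS(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vmName := "multinode-152500-m02" // the worker node being created above

	for i := 0; i < 60; i++ { // give the guest a few minutes to obtain a DHCP lease
		state, err := runPS(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vmName))
		if err != nil {
			log.Fatal(err)
		}
		if state != "Running" {
			time.Sleep(time.Second)
			continue
		}
		ip, err := runPS(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName))
		if err != nil {
			log.Fatal(err)
		}
		if ip != "" {
			fmt.Println("guest IP:", ip) // e.g. 172.27.195.108 in the run above
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("timed out waiting for the VM to report an IP address")
}
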
	I0421 20:07:37.227796   12908 machine.go:94] provisionDockerMachine start ...
	I0421 20:07:37.227796   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:39.446747   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:39.446747   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:39.446747   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:42.060745   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:07:42.060745   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:42.067551   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:07:42.077648   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:07:42.078722   12908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 20:07:42.214771   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 20:07:42.214771   12908 buildroot.go:166] provisioning hostname "multinode-152500-m02"
	I0421 20:07:42.214771   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:44.392503   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:44.392503   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:44.392503   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:47.060023   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:07:47.060023   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:47.069090   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:07:47.069402   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:07:47.069402   12908 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-152500-m02 && echo "multinode-152500-m02" | sudo tee /etc/hostname
	I0421 20:07:47.241135   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-152500-m02
	
	I0421 20:07:47.241135   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:49.399680   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:49.399872   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:49.399872   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:51.998204   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:07:51.998204   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:52.005661   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:07:52.005843   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:07:52.005843   12908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-152500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-152500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-152500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:07:52.170351   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
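
The hostname step above is plain SSH: connect to the new guest with the machine's generated key and run the rendered shell snippet. A minimal sketch with golang.org/x/crypto/ssh, assuming the id_rsa path and "docker" user shown in the sshutil lines; illustrative only, not the provisioner's actual code:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa`
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the guest's host key is not pinned here
	}
	client, err := ssh.Dial("tcp", "172.27.195.108:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same idea as the log: set the hostname and persist it to /etc/hostname.
	cmd := `sudo hostname multinode-152500-m02 && echo "multinode-152500-m02" | sudo tee /etc/hostname`
	out, err := sess.CombinedOutput(cmd)
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
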
	I0421 20:07:52.170460   12908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 20:07:52.170526   12908 buildroot.go:174] setting up certificates
	I0421 20:07:52.170526   12908 provision.go:84] configureAuth start
	I0421 20:07:52.170526   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:54.350922   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:54.351160   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:54.351246   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:07:56.998779   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:07:56.998779   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:56.998779   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:07:59.183917   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:07:59.183917   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:07:59.183988   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:01.856581   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:01.856581   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:01.856581   12908 provision.go:143] copyHostCerts
	I0421 20:08:01.857134   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 20:08:01.857449   12908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 20:08:01.857449   12908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 20:08:01.857905   12908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 20:08:01.858534   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 20:08:01.859150   12908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 20:08:01.859150   12908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 20:08:01.859150   12908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 20:08:01.860513   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 20:08:01.860513   12908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 20:08:01.860513   12908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 20:08:01.861139   12908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 20:08:01.861911   12908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-152500-m02 san=[127.0.0.1 172.27.195.108 localhost minikube multinode-152500-m02]
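
provision.go generates a per-machine server certificate signed by the local CA, with the SAN list shown above (loopback, the guest IP, and the hostnames). A condensed, self-contained crypto/x509 sketch of that idea; to stay runnable it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, so it is not minikube's actual certificate code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for ca.pem / ca-key.pem from the certs directory.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs from the log:
	// san=[127.0.0.1 172.27.195.108 localhost minikube multinode-152500-m02]
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-152500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.195.108")},
		DNSNames:     []string{"localhost", "minikube", "multinode-152500-m02"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644); err != nil {
		log.Fatal(err)
	}
}
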
	I0421 20:08:02.045419   12908 provision.go:177] copyRemoteCerts
	I0421 20:08:02.059351   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:08:02.059351   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:04.227835   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:04.227835   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:04.228779   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:06.854549   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:06.854549   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:06.855876   12908 sshutil.go:53] new ssh client: &{IP:172.27.195.108 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:08:06.975858   12908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9163638s)
	I0421 20:08:06.975973   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 20:08:06.975973   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:08:07.032270   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 20:08:07.032753   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0421 20:08:07.084625   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 20:08:07.084769   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 20:08:07.144880   12908 provision.go:87] duration metric: took 14.9742454s to configureAuth
	I0421 20:08:07.144880   12908 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:08:07.145618   12908 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:08:07.145618   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:09.275339   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:09.275339   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:09.275339   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:11.885630   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:11.886616   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:11.894028   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:08:11.894187   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:08:11.894731   12908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 20:08:12.046006   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 20:08:12.046006   12908 buildroot.go:70] root file system type: tmpfs
	I0421 20:08:12.046006   12908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 20:08:12.046549   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:14.230845   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:14.231420   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:14.231420   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:16.847078   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:16.847078   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:16.853678   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:08:16.854069   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:08:16.854069   12908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]

	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.198.190"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 20:08:17.028136   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.198.190
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 20:08:17.028136   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:19.171630   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:19.171630   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:19.172242   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:21.793216   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:21.793897   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:21.800115   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:08:21.800621   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:08:21.800696   12908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 20:08:24.103599   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 20:08:24.103744   12908 machine.go:97] duration metric: took 46.8756058s to provisionDockerMachine
	I0421 20:08:24.103744   12908 client.go:171] duration metric: took 1m59.1107459s to LocalClient.Create
	I0421 20:08:24.103856   12908 start.go:167] duration metric: took 1m59.110923s to libmachine.API.Create "multinode-152500"
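
The docker.service drop-in pushed over SSH above is rendered on the host, written to /lib/systemd/system/docker.service.new, and only swapped in (followed by daemon-reload, enable, restart) when it differs from the installed unit. A small text/template sketch of that rendering step; the struct and field names are illustrative, not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

// dockerUnit holds the values substituted into the unit file seen in the log:
// the control-plane IP that ends up in NO_PROXY and the --label provider value.
type dockerUnit struct {
	NoProxy  string
	Provider string
}

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={{.NoProxy}}"

# Clear the inherited ExecStart before setting our own (see the comments in the unit above).
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// Values taken from the run above: NO_PROXY is the primary node's IP, provider is hyperv.
	if err := t.Execute(os.Stdout, dockerUnit{NoProxy: "172.27.198.190", Provider: "hyperv"}); err != nil {
		log.Fatal(err)
	}
}
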
	I0421 20:08:24.103980   12908 start.go:293] postStartSetup for "multinode-152500-m02" (driver="hyperv")
	I0421 20:08:24.103980   12908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:08:24.119136   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:08:24.119136   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:26.310601   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:26.311293   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:26.311293   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:28.991972   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:28.991972   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:28.992338   12908 sshutil.go:53] new ssh client: &{IP:172.27.195.108 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:08:29.113138   12908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.993966s)
	I0421 20:08:29.127388   12908 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:08:29.134823   12908 command_runner.go:130] > NAME=Buildroot
	I0421 20:08:29.135195   12908 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 20:08:29.135195   12908 command_runner.go:130] > ID=buildroot
	I0421 20:08:29.135195   12908 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 20:08:29.135195   12908 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 20:08:29.135301   12908 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:08:29.135454   12908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 20:08:29.135888   12908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 20:08:29.136981   12908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 20:08:29.136981   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 20:08:29.151962   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:08:29.172650   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 20:08:29.223943   12908 start.go:296] duration metric: took 5.1199255s for postStartSetup
	I0421 20:08:29.226680   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:31.406024   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:31.406024   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:31.406801   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:34.008780   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:34.009679   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:34.009679   12908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:08:34.011823   12908 start.go:128] duration metric: took 2m9.0222647s to createHost
	I0421 20:08:34.012348   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:36.180304   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:36.180304   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:36.180304   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:38.776846   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:38.776846   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:38.784778   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:08:38.785312   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:08:38.785473   12908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 20:08:38.936400   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713730118.949029270
	
	I0421 20:08:38.936400   12908 fix.go:216] guest clock: 1713730118.949029270
	I0421 20:08:38.936400   12908 fix.go:229] Guest: 2024-04-21 20:08:38.94902927 +0000 UTC Remote: 2024-04-21 20:08:34.0118237 +0000 UTC m=+353.204319101 (delta=4.93720557s)
	I0421 20:08:38.936598   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:41.107697   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:41.107697   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:41.107809   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:43.715463   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:43.715679   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:43.720861   12908 main.go:141] libmachine: Using SSH client type: native
	I0421 20:08:43.721515   12908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.195.108 22 <nil> <nil>}
	I0421 20:08:43.721515   12908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713730118
	I0421 20:08:43.871739   12908 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 20:08:38 UTC 2024
	
	I0421 20:08:43.871905   12908 fix.go:236] clock set: Sun Apr 21 20:08:38 UTC 2024
	 (err=<nil>)
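
The skew that fix.go reports above is straight subtraction of the two clock readings, and can be reproduced directly; the timestamps below are the ones from this run:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps from the run above (both UTC).
	guest := time.Unix(1713730118, 949029270)                               // guest clock reading
	local := time.Date(2024, time.April, 21, 20, 8, 34, 11823700, time.UTC) // local "Remote:" reading

	// Matches the delta reported by fix.go (4.93720557s).
	fmt.Println("skew:", guest.Sub(local))
}
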
	I0421 20:08:43.871905   12908 start.go:83] releasing machines lock for "multinode-152500-m02", held for 2m18.8830316s
	I0421 20:08:43.871905   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:46.062448   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:46.062448   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:46.062523   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:48.729118   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:48.729893   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:48.733887   12908 out.go:177] * Found network options:
	I0421 20:08:48.737038   12908 out.go:177]   - NO_PROXY=172.27.198.190
	W0421 20:08:48.739565   12908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 20:08:48.742081   12908 out.go:177]   - NO_PROXY=172.27.198.190
	W0421 20:08:48.744698   12908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 20:08:48.746116   12908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 20:08:48.749499   12908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:08:48.749499   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:48.760406   12908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 20:08:48.760406   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:08:50.963334   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:50.963897   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:50.963897   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:50.969364   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:08:50.969364   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:50.969488   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:08:53.691629   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:53.692649   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:53.693151   12908 sshutil.go:53] new ssh client: &{IP:172.27.195.108 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:08:53.721551   12908 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:08:53.721551   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:08:53.722088   12908 sshutil.go:53] new ssh client: &{IP:172.27.195.108 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:08:53.798591   12908 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0421 20:08:53.799296   12908 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0388538s)
	W0421 20:08:53.799406   12908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:08:53.811643   12908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:08:53.895743   12908 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 20:08:53.895743   12908 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.146207s)
	I0421 20:08:53.895743   12908 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0421 20:08:53.895963   12908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 20:08:53.895963   12908 start.go:494] detecting cgroup driver to use...
	I0421 20:08:53.896192   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:08:53.936095   12908 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0421 20:08:53.952644   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 20:08:53.994208   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 20:08:54.015194   12908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 20:08:54.029747   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 20:08:54.065009   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:08:54.102229   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 20:08:54.137098   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:08:54.172827   12908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:08:54.209073   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 20:08:54.245345   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 20:08:54.287043   12908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 20:08:54.323060   12908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:08:54.344249   12908 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 20:08:54.362065   12908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:08:54.399544   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:08:54.632125   12908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 20:08:54.667078   12908 start.go:494] detecting cgroup driver to use...
	I0421 20:08:54.682089   12908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 20:08:54.708049   12908 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0421 20:08:54.708049   12908 command_runner.go:130] > [Unit]
	I0421 20:08:54.708049   12908 command_runner.go:130] > Description=Docker Application Container Engine
	I0421 20:08:54.708148   12908 command_runner.go:130] > Documentation=https://docs.docker.com
	I0421 20:08:54.708148   12908 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0421 20:08:54.708148   12908 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0421 20:08:54.708148   12908 command_runner.go:130] > StartLimitBurst=3
	I0421 20:08:54.708209   12908 command_runner.go:130] > StartLimitIntervalSec=60
	I0421 20:08:54.708251   12908 command_runner.go:130] > [Service]
	I0421 20:08:54.708251   12908 command_runner.go:130] > Type=notify
	I0421 20:08:54.708308   12908 command_runner.go:130] > Restart=on-failure
	I0421 20:08:54.708308   12908 command_runner.go:130] > Environment=NO_PROXY=172.27.198.190
	I0421 20:08:54.708308   12908 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0421 20:08:54.708308   12908 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0421 20:08:54.708308   12908 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0421 20:08:54.708308   12908 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0421 20:08:54.708308   12908 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0421 20:08:54.708308   12908 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0421 20:08:54.708308   12908 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0421 20:08:54.708308   12908 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0421 20:08:54.708308   12908 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0421 20:08:54.708308   12908 command_runner.go:130] > ExecStart=
	I0421 20:08:54.708308   12908 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0421 20:08:54.708308   12908 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0421 20:08:54.708308   12908 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0421 20:08:54.708308   12908 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0421 20:08:54.708308   12908 command_runner.go:130] > LimitNOFILE=infinity
	I0421 20:08:54.708308   12908 command_runner.go:130] > LimitNPROC=infinity
	I0421 20:08:54.708308   12908 command_runner.go:130] > LimitCORE=infinity
	I0421 20:08:54.708308   12908 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0421 20:08:54.708308   12908 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0421 20:08:54.708308   12908 command_runner.go:130] > TasksMax=infinity
	I0421 20:08:54.708308   12908 command_runner.go:130] > TimeoutStartSec=0
	I0421 20:08:54.708308   12908 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0421 20:08:54.708308   12908 command_runner.go:130] > Delegate=yes
	I0421 20:08:54.708308   12908 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0421 20:08:54.708308   12908 command_runner.go:130] > KillMode=process
	I0421 20:08:54.708308   12908 command_runner.go:130] > [Install]
	I0421 20:08:54.708308   12908 command_runner.go:130] > WantedBy=multi-user.target
	I0421 20:08:54.725395   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:08:54.764384   12908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:08:54.811802   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:08:54.851412   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:08:54.888997   12908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 20:08:54.962550   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:08:54.988578   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:08:55.025224   12908 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0421 20:08:55.039961   12908 ssh_runner.go:195] Run: which cri-dockerd
	I0421 20:08:55.047651   12908 command_runner.go:130] > /usr/bin/cri-dockerd
	I0421 20:08:55.062322   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 20:08:55.086733   12908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 20:08:55.141437   12908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 20:08:55.367792   12908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 20:08:55.585743   12908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 20:08:55.585873   12908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 20:08:55.641056   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:08:55.860510   12908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 20:08:58.445992   12908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5854013s)
	I0421 20:08:58.460740   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 20:08:58.501067   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:08:58.543716   12908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 20:08:58.774429   12908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 20:08:58.995729   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:08:59.231795   12908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 20:08:59.277752   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:08:59.320091   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:08:59.548647   12908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 20:08:59.686647   12908 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 20:08:59.700691   12908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 20:08:59.713849   12908 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0421 20:08:59.713849   12908 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 20:08:59.713849   12908 command_runner.go:130] > Device: 0,22	Inode: 883         Links: 1
	I0421 20:08:59.713849   12908 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0421 20:08:59.713849   12908 command_runner.go:130] > Access: 2024-04-21 20:08:59.593309291 +0000
	I0421 20:08:59.713849   12908 command_runner.go:130] > Modify: 2024-04-21 20:08:59.593309291 +0000
	I0421 20:08:59.713849   12908 command_runner.go:130] > Change: 2024-04-21 20:08:59.600309296 +0000
	I0421 20:08:59.713849   12908 command_runner.go:130] >  Birth: -
	I0421 20:08:59.713849   12908 start.go:562] Will wait 60s for crictl version
	I0421 20:08:59.728343   12908 ssh_runner.go:195] Run: which crictl
	I0421 20:08:59.735371   12908 command_runner.go:130] > /usr/bin/crictl
	I0421 20:08:59.748798   12908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:08:59.813298   12908 command_runner.go:130] > Version:  0.1.0
	I0421 20:08:59.813298   12908 command_runner.go:130] > RuntimeName:  docker
	I0421 20:08:59.813298   12908 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0421 20:08:59.813298   12908 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 20:08:59.813298   12908 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 20:08:59.823320   12908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:08:59.862358   12908 command_runner.go:130] > 26.0.1
	I0421 20:08:59.872300   12908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:08:59.904620   12908 command_runner.go:130] > 26.0.1
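As an aside, start.go:541 above waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl and the docker server version. A rough Go sketch of that kind of socket wait (illustrative only, run inside the guest; the path and timeout are taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSocket polls a unix socket until it accepts a connection or the
    // deadline passes, similar in spirit to the 60s wait logged above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("unix", path, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("socket %s not ready after %s: %w", path, timeout, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("cri-dockerd socket is accepting connections")
    }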
	I0421 20:08:59.910605   12908 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 20:08:59.913018   12908 out.go:177]   - env NO_PROXY=172.27.198.190
	I0421 20:08:59.917171   12908 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 20:08:59.921711   12908 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 20:08:59.921711   12908 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 20:08:59.921711   12908 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 20:08:59.921711   12908 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 20:08:59.925214   12908 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 20:08:59.925262   12908 ip.go:210] interface addr: 172.27.192.1/20
	I0421 20:08:59.940452   12908 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 20:08:59.949579   12908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:08:59.973513   12908 mustload.go:65] Loading cluster: multinode-152500
	I0421 20:08:59.974283   12908 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:08:59.975040   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:09:02.104854   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:09:02.105127   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:09:02.105127   12908 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:09:02.105471   12908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500 for IP: 172.27.195.108
	I0421 20:09:02.105471   12908 certs.go:194] generating shared ca certs ...
	I0421 20:09:02.105471   12908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:09:02.106090   12908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 20:09:02.106826   12908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 20:09:02.106904   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 20:09:02.106904   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 20:09:02.107525   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 20:09:02.107643   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 20:09:02.108354   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 20:09:02.108419   12908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 20:09:02.108419   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 20:09:02.108954   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 20:09:02.109183   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 20:09:02.109183   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 20:09:02.109935   12908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 20:09:02.109935   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:02.109935   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 20:09:02.110458   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 20:09:02.110688   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:09:02.167236   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:09:02.216819   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:09:02.267297   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:09:02.320181   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:09:02.369614   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 20:09:02.420698   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 20:09:02.481670   12908 ssh_runner.go:195] Run: openssl version
	I0421 20:09:02.491318   12908 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 20:09:02.505721   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 20:09:02.542613   12908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 20:09:02.549269   12908 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:09:02.549597   12908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:09:02.568960   12908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 20:09:02.579100   12908 command_runner.go:130] > 51391683
	I0421 20:09:02.593528   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 20:09:02.628713   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 20:09:02.663546   12908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 20:09:02.669880   12908 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:09:02.669880   12908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:09:02.683001   12908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 20:09:02.691955   12908 command_runner.go:130] > 3ec20f2e
	I0421 20:09:02.706373   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:09:02.741836   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:09:02.778680   12908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:02.786045   12908 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:02.786045   12908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:02.799628   12908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:09:02.808574   12908 command_runner.go:130] > b5213941
	I0421 20:09:02.825556   12908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
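The three openssl/ln round trips above follow the usual subject-hash convention: "openssl x509 -hash" prints the certificate's subject hash (b5213941 for minikubeCA.pem in this run) and the cert is then linked as /etc/ssl/certs/<hash>.0. A small Go sketch of the same idea (hypothetical helper; it shells out to openssl exactly as the log does):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at the
    // installed PEM, matching the "test -L || ln -fs" sequence above.
    func linkCertByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("openssl x509 -hash: %w", err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
        link := filepath.Join(certsDir, hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // link already present
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println("error:", err)
        }
    }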
	I0421 20:09:02.864825   12908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:09:02.879604   12908 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:09:02.879816   12908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:09:02.879816   12908 kubeadm.go:928] updating node {m02 172.27.195.108 8443 v1.30.0 docker false true} ...
	I0421 20:09:02.880368   12908 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.195.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 20:09:02.894223   12908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:09:02.914243   12908 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0421 20:09:02.915274   12908 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0421 20:09:02.928623   12908 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0421 20:09:02.950529   12908 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0421 20:09:02.950529   12908 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0421 20:09:02.950529   12908 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0421 20:09:02.950714   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 20:09:02.950781   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 20:09:02.967666   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:09:02.967666   12908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0421 20:09:02.969186   12908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0421 20:09:02.991938   12908 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 20:09:02.991938   12908 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 20:09:02.992362   12908 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 20:09:02.992362   12908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0421 20:09:02.992362   12908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0421 20:09:02.992500   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0421 20:09:02.992500   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0421 20:09:03.006782   12908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0421 20:09:03.049321   12908 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 20:09:03.053989   12908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0421 20:09:03.054682   12908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
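binary.go:76 above pulls kubeadm, kubelet and kubectl straight from dl.k8s.io and ties each URL to its .sha256 file. A minimal Go sketch of the verification half of that step (the expected digest below is a placeholder; the real value comes from the corresponding .sha256 file, and fetching it is left out of this sketch):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifySHA256 checks a downloaded binary against an expected hex digest,
    // the same kind of check implied by the checksum=file:...sha256 URLs above.
    func verifySHA256(path, expectedHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != expectedHex {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, expectedHex)
        }
        return nil
    }

    func main() {
        // Hypothetical digest; the real value is published as kubelet.sha256 on dl.k8s.io.
        if err := verifySHA256("kubelet", "0123456789abcdef..."); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kubelet checksum OK")
    }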
	I0421 20:09:04.344872   12908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0421 20:09:04.364669   12908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0421 20:09:04.407450   12908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:09:04.467407   12908 ssh_runner.go:195] Run: grep 172.27.198.190	control-plane.minikube.internal$ /etc/hosts
	I0421 20:09:04.475875   12908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.198.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:09:04.516169   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:09:04.756176   12908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:09:04.788348   12908 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:09:04.789157   12908 start.go:316] joinCluster: &{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:09:04.789349   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 20:09:04.789413   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:09:06.997427   12908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:09:06.997427   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:09:06.997893   12908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:09:09.603651   12908 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:09:09.603651   12908 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:09:09.604317   12908 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:09:09.840061   12908 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mb9f4v.m42ilnhdct9tinjo --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 20:09:09.841441   12908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0518835s)
	I0421 20:09:09.841441   12908 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:09:09.841621   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mb9f4v.m42ilnhdct9tinjo --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-152500-m02"
	I0421 20:09:10.067420   12908 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:09:11.429985   12908 command_runner.go:130] > [preflight] Running pre-flight checks
	I0421 20:09:11.429985   12908 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0421 20:09:11.430163   12908 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0421 20:09:11.430163   12908 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:09:11.430163   12908 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:09:11.430163   12908 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0421 20:09:11.430163   12908 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:09:11.430163   12908 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002195347s
	I0421 20:09:11.430163   12908 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0421 20:09:11.430282   12908 command_runner.go:130] > This node has joined the cluster:
	I0421 20:09:11.430282   12908 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0421 20:09:11.430343   12908 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0421 20:09:11.430343   12908 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0421 20:09:11.430343   12908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mb9f4v.m42ilnhdct9tinjo --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-152500-m02": (1.5887104s)
	I0421 20:09:11.430459   12908 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 20:09:11.887365   12908 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0421 20:09:11.902873   12908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-152500-m02 minikube.k8s.io/updated_at=2024_04_21T20_09_11_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=multinode-152500 minikube.k8s.io/primary=false
	I0421 20:09:12.096818   12908 command_runner.go:130] > node/multinode-152500-m02 labeled
	I0421 20:09:12.096818   12908 start.go:318] duration metric: took 7.3076085s to joinCluster
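After the join, the new worker is tagged with minikube's bookkeeping labels (the kubectl label invocation above). The equivalent call, sketched with client-go against the same API server, could look like this (a sketch only, assuming a local kubeconfig; only two of the labels are shown):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // A subset of the labels the kubectl call above applies to the new worker.
        patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false"}}}`)
        _, err = cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-152500-m02",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("node labeled")
    }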
	I0421 20:09:12.096818   12908 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:09:12.101794   12908 out.go:177] * Verifying Kubernetes components...
	I0421 20:09:12.097420   12908 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:09:12.119084   12908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:09:12.376508   12908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:09:12.404072   12908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:09:12.404923   12908 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.198.190:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:09:12.405645   12908 node_ready.go:35] waiting up to 6m0s for node "multinode-152500-m02" to be "Ready" ...
	I0421 20:09:12.406268   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:12.406268   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:12.406268   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:12.406268   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:12.418457   12908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0421 20:09:12.418457   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:12.419434   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:12.419434   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:12.419434   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:12 GMT
	I0421 20:09:12.419474   12908 round_trippers.go:580]     Audit-Id: da60135b-4cea-4657-9e0e-fc139867d7d1
	I0421 20:09:12.419474   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:12.419474   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:12.419474   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:12.419557   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:12.919062   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:12.919062   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:12.919062   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:12.919062   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:12.923655   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:12.923853   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:12.923853   12908 round_trippers.go:580]     Audit-Id: 3a85bfca-1ec7-44e9-b164-4d8fea6ac93b
	I0421 20:09:12.923853   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:12.923853   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:12.923853   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:12.923853   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:12.923853   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:12.923853   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:12 GMT
	I0421 20:09:12.924074   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:13.415868   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:13.415868   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:13.416160   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:13.416160   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:13.419531   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:13.420437   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:13.420437   12908 round_trippers.go:580]     Audit-Id: 7b6e2d92-d54d-4005-8a91-0bcefe7e07d6
	I0421 20:09:13.420437   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:13.420437   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:13.420437   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:13.420437   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:13.420437   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:13.420437   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:13 GMT
	I0421 20:09:13.420732   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:13.916516   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:13.916828   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:13.916828   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:13.916828   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:13.920657   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:13.921109   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:13.921109   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:13.921109   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:13.921109   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:13 GMT
	I0421 20:09:13.921109   12908 round_trippers.go:580]     Audit-Id: 40a1a4fb-2fc6-4e81-9da6-4a6c13ab463e
	I0421 20:09:13.921109   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:13.921109   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:13.921109   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:13.921327   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:14.416722   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:14.416956   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:14.417017   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:14.417017   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:14.422627   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:09:14.422627   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:14.423421   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:14.423421   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:14.423421   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:14.423421   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:14.423421   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:14.423421   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:14 GMT
	I0421 20:09:14.423421   12908 round_trippers.go:580]     Audit-Id: f27aba8c-98de-4d35-be93-32307f7a1035
	I0421 20:09:14.423790   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:14.425121   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:14.914520   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:14.914520   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:14.914520   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:14.914793   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:14.918457   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:14.918457   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:14.918457   12908 round_trippers.go:580]     Audit-Id: 9aa575ae-49c6-4312-a792-9ca6c6c45341
	I0421 20:09:14.918457   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:14.918457   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:14.918457   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:14.918457   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:14.918457   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:14.918457   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:14 GMT
	I0421 20:09:14.918908   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:15.413451   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:15.413451   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:15.413451   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:15.413451   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:15.419903   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:09:15.420028   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:15.420028   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:15.420028   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:15.420028   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:15 GMT
	I0421 20:09:15.420116   12908 round_trippers.go:580]     Audit-Id: 27dc8510-4599-4844-850f-904d4c7bede8
	I0421 20:09:15.420155   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:15.420155   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:15.420237   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:15.420411   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:15.916390   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:15.916390   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:15.916390   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:15.916390   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:15.919981   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:15.919981   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:15.919981   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:15.919981   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:15.919981   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:15.919981   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:15 GMT
	I0421 20:09:15.919981   12908 round_trippers.go:580]     Audit-Id: 28d5fcc6-ea61-4770-9dcb-a8c585a0b7e6
	I0421 20:09:15.919981   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:15.919981   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:15.920630   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:16.420865   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:16.420865   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:16.420865   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:16.420865   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:16.427437   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:09:16.427437   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:16.427437   12908 round_trippers.go:580]     Audit-Id: 5482ec36-c9c2-4269-b6cc-9c0d08f09e35
	I0421 20:09:16.427437   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:16.427437   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:16.427437   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:16.427437   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:16.427437   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:16.427437   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:16 GMT
	I0421 20:09:16.427437   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:16.428119   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:16.912098   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:16.912162   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:16.912162   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:16.912162   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:16.918218   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:09:16.918218   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:16.918218   12908 round_trippers.go:580]     Audit-Id: 09c25363-c0d6-414a-bff4-d61ab4f3c234
	I0421 20:09:16.918218   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:16.918218   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:16.918218   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:16.918218   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:16.918757   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:16.918757   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:16 GMT
	I0421 20:09:16.918908   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:17.415713   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:17.415920   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:17.415920   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:17.415920   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:17.421819   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:09:17.421860   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:17.421860   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:17.421916   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:17.421916   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:17.421916   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:17.421950   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:17 GMT
	I0421 20:09:17.421950   12908 round_trippers.go:580]     Audit-Id: 8640f7f0-3530-4628-a5af-a1c7eb1f95bc
	I0421 20:09:17.421950   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:17.422132   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:17.907748   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:17.907748   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:17.907748   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:17.907748   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:17.916102   12908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:09:17.916102   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:17.916102   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:17.916102   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:17.916102   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:17 GMT
	I0421 20:09:17.916102   12908 round_trippers.go:580]     Audit-Id: cd7f1b9c-d1ef-498e-ba5f-4fece49e7fb4
	I0421 20:09:17.916102   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:17.916102   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:17.916102   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:17.916102   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:18.416192   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:18.416192   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:18.416192   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:18.416321   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:18.420037   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:18.420140   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:18.420140   12908 round_trippers.go:580]     Audit-Id: 99b6f9ae-e216-419c-a36d-589dcb8e5202
	I0421 20:09:18.420140   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:18.420140   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:18.420140   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:18.420140   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:18.420140   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:18.420140   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:18 GMT
	I0421 20:09:18.420140   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:18.906893   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:18.906893   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:18.906893   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:18.906893   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:18.911533   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:18.911585   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:18.911585   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:18.911618   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:18.911618   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:18.911618   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:18 GMT
	I0421 20:09:18.911618   12908 round_trippers.go:580]     Audit-Id: 728a633d-6f23-422f-a688-0fbfe687435c
	I0421 20:09:18.911654   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:18.911654   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:18.911855   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:18.912441   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:19.411169   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:19.411169   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:19.411169   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:19.411169   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:19.416254   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:19.416254   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:19.416254   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:19.416254   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:19.416254   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:19 GMT
	I0421 20:09:19.416254   12908 round_trippers.go:580]     Audit-Id: 1e29125e-5ae3-42b6-8202-99f15802db5e
	I0421 20:09:19.416342   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:19.416342   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:19.416342   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:19.416437   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:19.918565   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:19.918565   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:19.918801   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:19.918801   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:19.922129   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:19.922129   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:19.922719   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:19 GMT
	I0421 20:09:19.922719   12908 round_trippers.go:580]     Audit-Id: 183c6fcc-153f-4cc6-965c-ca8479117f32
	I0421 20:09:19.922719   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:19.922719   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:19.922719   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:19.922719   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:19.922719   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:19.922950   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:20.408695   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:20.408748   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:20.408748   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:20.408801   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:20.412502   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:20.412502   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:20.412502   12908 round_trippers.go:580]     Audit-Id: 34cadb9d-32a7-4041-a8c5-d85940235ba9
	I0421 20:09:20.412502   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:20.412502   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:20.412502   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:20.412502   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:20.412502   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:20.412502   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:20 GMT
	I0421 20:09:20.412502   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:20.908113   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:20.908113   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:20.908113   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:20.908113   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:20.912584   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:20.912584   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:20.912584   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:20 GMT
	I0421 20:09:20.912663   12908 round_trippers.go:580]     Audit-Id: d4689a47-9657-48d8-9eb2-391e8cd4bdd5
	I0421 20:09:20.912663   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:20.912663   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:20.912663   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:20.912663   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:20.912663   12908 round_trippers.go:580]     Content-Length: 4030
	I0421 20:09:20.912854   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"617","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0421 20:09:20.913260   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:21.407443   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:21.407538   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:21.407538   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:21.407538   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:21.556280   12908 round_trippers.go:574] Response Status: 200 OK in 148 milliseconds
	I0421 20:09:21.556598   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:21.556598   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:21.556598   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:21.556598   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:21 GMT
	I0421 20:09:21.556598   12908 round_trippers.go:580]     Audit-Id: e011dbb7-ad80-435d-b7ac-ba019d6f04c7
	I0421 20:09:21.556598   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:21.556598   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:21.557248   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:21.912495   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:21.912587   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:21.912633   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:21.912633   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:21.916981   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:21.916981   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:21.916981   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:21.916981   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:21.916981   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:21 GMT
	I0421 20:09:21.916981   12908 round_trippers.go:580]     Audit-Id: 76079878-ce36-4039-b437-278d7a80e1f9
	I0421 20:09:21.916981   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:21.916981   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:21.916981   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:22.416730   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:22.416789   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:22.416789   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:22.416789   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:22.421365   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:22.421866   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:22.421866   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:22.421866   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:22.421866   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:22 GMT
	I0421 20:09:22.421866   12908 round_trippers.go:580]     Audit-Id: f7e0d13e-45bc-4db9-9965-b90fab47d87f
	I0421 20:09:22.421866   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:22.421866   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:22.422090   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:22.916828   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:22.916935   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:22.916968   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:22.916968   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:22.921631   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:22.921631   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:22.921631   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:22.921631   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:22 GMT
	I0421 20:09:22.921631   12908 round_trippers.go:580]     Audit-Id: f75c1c64-b83d-42f6-a62c-917cd510288c
	I0421 20:09:22.921631   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:22.921631   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:22.921631   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:22.922619   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:22.923319   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:23.407494   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:23.407494   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:23.407494   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:23.407494   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:23.411531   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:23.411531   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:23.411531   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:23.411531   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:23.411531   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:23 GMT
	I0421 20:09:23.411531   12908 round_trippers.go:580]     Audit-Id: af5228a3-718d-4d06-843f-2952ae17ceb0
	I0421 20:09:23.411531   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:23.411531   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:23.412566   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:23.914919   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:23.915420   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:23.915420   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:23.915420   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:23.919787   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:23.919787   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:23.919787   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:23.919871   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:23.919871   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:23 GMT
	I0421 20:09:23.919871   12908 round_trippers.go:580]     Audit-Id: 5b770833-71a8-441f-a04d-2c27d2ff1146
	I0421 20:09:23.919871   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:23.919871   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:23.920357   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:24.408794   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:24.408794   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:24.408794   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:24.408794   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:24.413799   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:09:24.414104   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:24.414104   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:24 GMT
	I0421 20:09:24.414104   12908 round_trippers.go:580]     Audit-Id: a0539839-6dbd-4008-b36e-b4a02ba789bc
	I0421 20:09:24.414104   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:24.414104   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:24.414104   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:24.414203   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:24.414747   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:24.919446   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:24.919446   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:24.919527   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:24.919527   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:24.923455   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:24.923455   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:24.923455   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:24.923455   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:24 GMT
	I0421 20:09:24.923455   12908 round_trippers.go:580]     Audit-Id: 9fbe823d-c352-4080-a071-de7696d8d9d2
	I0421 20:09:24.923455   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:24.923455   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:24.923455   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:24.924053   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:24.924701   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:25.410230   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:25.410292   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:25.410292   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:25.410292   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:25.414952   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:25.414952   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:25.414952   12908 round_trippers.go:580]     Audit-Id: b21c07de-2428-4d59-8408-271c39e0ac84
	I0421 20:09:25.414952   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:25.414952   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:25.414952   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:25.414952   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:25.414952   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:25 GMT
	I0421 20:09:25.415097   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:25.918182   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:25.918182   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:25.918182   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:25.918182   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:25.923472   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:09:25.923472   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:25.923472   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:25.923472   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:25.923472   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:25 GMT
	I0421 20:09:25.923472   12908 round_trippers.go:580]     Audit-Id: a77be9f5-bcff-4845-862a-7c63aac803f7
	I0421 20:09:25.923472   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:25.923472   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:25.923734   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:26.411899   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:26.412178   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:26.412238   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:26.412238   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:26.416820   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:26.416820   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:26.416820   12908 round_trippers.go:580]     Audit-Id: c9c0d5e4-ad7e-4dfc-9409-94256da860b5
	I0421 20:09:26.417280   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:26.417280   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:26.417280   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:26.417280   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:26.417280   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:26 GMT
	I0421 20:09:26.417576   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:26.917937   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:26.918025   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:26.918025   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:26.918025   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:26.924393   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:09:26.924393   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:26.924393   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:26 GMT
	I0421 20:09:26.924393   12908 round_trippers.go:580]     Audit-Id: 73819857-0283-4918-901d-b421003daaa9
	I0421 20:09:26.924393   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:26.924393   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:26.924393   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:26.924920   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:26.925680   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:26.925786   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:27.416961   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:27.416961   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:27.416961   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:27.416961   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:27.422210   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:27.422210   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:27.422286   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:27.422286   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:27.422286   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:27.422286   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:27 GMT
	I0421 20:09:27.422286   12908 round_trippers.go:580]     Audit-Id: fcc9e62c-f3fd-465f-b657-739f3628e545
	I0421 20:09:27.422286   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:27.422519   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:27.919566   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:27.919566   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:27.919566   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:27.919566   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:27.922982   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:27.922982   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:27.922982   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:27.924156   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:27.924156   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:27 GMT
	I0421 20:09:27.924156   12908 round_trippers.go:580]     Audit-Id: b1cc3eaa-3e29-43ae-bf24-bfd3ffddb35b
	I0421 20:09:27.924156   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:27.924156   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:27.924419   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:28.418218   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:28.418526   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:28.418526   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:28.418526   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:28.425369   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:09:28.425369   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:28.425369   12908 round_trippers.go:580]     Audit-Id: 82d31b57-6874-4c2b-9c4c-346f15616123
	I0421 20:09:28.425369   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:28.425369   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:28.425369   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:28.425369   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:28.425369   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:28 GMT
	I0421 20:09:28.426181   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:28.919963   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:28.920038   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:28.920038   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:28.920038   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:28.925266   12908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:09:28.925266   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:28.925266   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:28.925390   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:28.925390   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:28.925390   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:28.925390   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:28 GMT
	I0421 20:09:28.925390   12908 round_trippers.go:580]     Audit-Id: 3f711f13-079d-41f2-87dc-88efe1ad4e30
	I0421 20:09:28.925736   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:28.926086   12908 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:09:29.416362   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:29.416600   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.416600   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.416600   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.420860   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:29.420860   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.420860   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.420860   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.420860   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.420860   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.420860   12908 round_trippers.go:580]     Audit-Id: de8d13a0-bedb-4445-898d-ab58a5cda835
	I0421 20:09:29.420860   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.421442   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"628","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0421 20:09:29.914584   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:29.914584   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.914960   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.914960   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.919472   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:29.919472   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.919472   12908 round_trippers.go:580]     Audit-Id: 2dd1b010-b838-42e6-902c-17ab9b492692
	I0421 20:09:29.919472   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.919472   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.919472   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.919472   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.919472   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.920070   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"650","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0421 20:09:29.920148   12908 node_ready.go:49] node "multinode-152500-m02" has status "Ready":"True"
	I0421 20:09:29.920148   12908 node_ready.go:38] duration metric: took 17.514375s for node "multinode-152500-m02" to be "Ready" ...
	I0421 20:09:29.920148   12908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:09:29.920148   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods
	I0421 20:09:29.920148   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.920148   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.920148   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.929459   12908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 20:09:29.929459   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.929459   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.929459   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.929459   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.929459   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.929459   12908 round_trippers.go:580]     Audit-Id: 7ec95aec-3b42-4420-a666-c771bf3705f2
	I0421 20:09:29.929459   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.931847   12908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"650"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0421 20:09:29.936078   12908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.936329   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:09:29.936376   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.936376   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.936408   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.939214   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:09:29.939214   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.939214   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.939214   12908 round_trippers.go:580]     Audit-Id: a9f4135f-8161-43e7-b580-2dd8d8310c13
	I0421 20:09:29.939214   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.939214   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.939214   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.939214   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.940534   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0421 20:09:29.941118   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:29.941118   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.941118   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.941118   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.944478   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:29.944813   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.944813   12908 round_trippers.go:580]     Audit-Id: 84bb1baf-3ee9-4b8b-a2f1-ee9470c60469
	I0421 20:09:29.944813   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.944813   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.944813   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.944813   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.944813   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.945108   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0421 20:09:29.945108   12908 pod_ready.go:92] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:09:29.945651   12908 pod_ready.go:81] duration metric: took 9.0302ms for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.945651   12908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.945956   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:09:29.946054   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.946054   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.946054   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.949643   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:29.949815   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.949815   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.949815   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.949815   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.949815   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.949866   12908 round_trippers.go:580]     Audit-Id: 014103ac-9423-4a50-ab84-c5650ac4dc7d
	I0421 20:09:29.949906   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.950207   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"e5f399f5-b04e-4ac1-8646-d103d2d8f74a","resourceVersion":"322","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.198.190:2379","kubernetes.io/config.hash":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.mirror":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.seen":"2024-04-21T20:05:53.333716613Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0421 20:09:29.950715   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:29.950756   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.950756   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.950756   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.957266   12908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:09:29.957266   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.957266   12908 round_trippers.go:580]     Audit-Id: cae24524-a6d5-4b49-b420-17e8823ec07a
	I0421 20:09:29.957266   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.957266   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.957266   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.957266   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.957266   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.957565   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0421 20:09:29.957565   12908 pod_ready.go:92] pod "etcd-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:09:29.957565   12908 pod_ready.go:81] duration metric: took 11.9141ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.957565   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.958195   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:09:29.958195   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.958195   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.958195   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.961027   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:09:29.961096   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.961096   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.961096   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.961096   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.961096   12908 round_trippers.go:580]     Audit-Id: ca80dd26-d7d7-43f7-b7ea-25edec67cbb3
	I0421 20:09:29.961096   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.961096   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.961259   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"52744df0-77af-4caf-b69d-af2789c25eab","resourceVersion":"324","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.198.190:8443","kubernetes.io/config.hash":"795735df3eb25834ddaf2db596e59a82","kubernetes.io/config.mirror":"795735df3eb25834ddaf2db596e59a82","kubernetes.io/config.seen":"2024-04-21T20:05:53.333722413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0421 20:09:29.961259   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:29.961898   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.961898   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.961898   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.964229   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:09:29.964229   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.964229   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.964229   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.964229   12908 round_trippers.go:580]     Audit-Id: d7bceac4-1c2f-4c49-93d3-d48ca9249478
	I0421 20:09:29.964229   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.965270   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.965270   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.965490   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0421 20:09:29.965872   12908 pod_ready.go:92] pod "kube-apiserver-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:09:29.965872   12908 pod_ready.go:81] duration metric: took 8.3065ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.965872   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.966016   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:09:29.966016   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.966016   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.966016   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.968763   12908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:09:29.968763   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.968763   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.968763   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.968763   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.968763   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.968763   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.968763   12908 round_trippers.go:580]     Audit-Id: 424b015c-d714-4455-ab32-11916554a7e0
	I0421 20:09:29.968763   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"330","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0421 20:09:29.968763   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:29.968763   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:29.968763   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:29.968763   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:29.972483   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:29.972483   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:29.972483   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:29.972483   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:29.972483   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:29.972483   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:29.972483   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:29 GMT
	I0421 20:09:29.972483   12908 round_trippers.go:580]     Audit-Id: eb24c646-4670-4fd5-afe7-ea422976a6d0
	I0421 20:09:29.972483   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0421 20:09:29.972483   12908 pod_ready.go:92] pod "kube-controller-manager-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:09:29.972483   12908 pod_ready.go:81] duration metric: took 6.6109ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:29.972483   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:30.120562   12908 request.go:629] Waited for 147.8229ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:09:30.120652   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:09:30.120652   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:30.120652   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:30.120748   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:30.125517   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:30.125517   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:30.125517   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:30.125517   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:30 GMT
	I0421 20:09:30.125517   12908 round_trippers.go:580]     Audit-Id: 876b1239-eb27-4e07-9f5d-b356977d3b3c
	I0421 20:09:30.125517   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:30.125517   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:30.125517   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:30.126739   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9zlm5","generateName":"kube-proxy-","namespace":"kube-system","uid":"61ba111b-28e9-40db-943d-22a595fdc27e","resourceVersion":"633","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0421 20:09:30.322445   12908 request.go:629] Waited for 194.9028ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:30.322798   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:09:30.322798   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:30.322883   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:30.322883   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:30.326172   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:30.326172   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:30.326172   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:30.326172   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:30.326172   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:30.326172   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:30 GMT
	I0421 20:09:30.326172   12908 round_trippers.go:580]     Audit-Id: 0bbb1be1-ce9c-44ae-9acb-8a41604ff8cf
	I0421 20:09:30.326172   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:30.327417   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"650","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0421 20:09:30.327882   12908 pod_ready.go:92] pod "kube-proxy-9zlm5" in "kube-system" namespace has status "Ready":"True"
	I0421 20:09:30.327950   12908 pod_ready.go:81] duration metric: took 355.464ms for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:30.327950   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:30.526121   12908 request.go:629] Waited for 198.0372ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:09:30.526121   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:09:30.526121   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:30.526121   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:30.526121   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:30.539712   12908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 20:09:30.539712   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:30.539712   12908 round_trippers.go:580]     Audit-Id: 937d8cef-a845-4c75-a659-756bd3640305
	I0421 20:09:30.539712   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:30.539712   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:30.539712   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:30.539712   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:30.540533   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:30 GMT
	I0421 20:09:30.540767   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"405","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0421 20:09:30.727504   12908 request.go:629] Waited for 186.0064ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:30.727873   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:30.727914   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:30.727928   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:30.727957   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:30.731557   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:30.731557   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:30.731557   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:30.731557   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:30 GMT
	I0421 20:09:30.731557   12908 round_trippers.go:580]     Audit-Id: 52734223-ec17-4848-b2a1-468fba88c905
	I0421 20:09:30.731557   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:30.731557   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:30.731557   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:30.732564   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0421 20:09:30.733489   12908 pod_ready.go:92] pod "kube-proxy-kl8t2" in "kube-system" namespace has status "Ready":"True"
	I0421 20:09:30.733489   12908 pod_ready.go:81] duration metric: took 405.536ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:30.733562   12908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:30.927936   12908 request.go:629] Waited for 194.0308ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:09:30.928097   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:09:30.928097   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:30.928097   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:30.928097   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:30.932728   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:30.932728   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:30.932728   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:30.932728   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:30.932728   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:30.932728   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:30.933410   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:30 GMT
	I0421 20:09:30.933410   12908 round_trippers.go:580]     Audit-Id: 4f0fe468-608c-4134-91c6-94865acf8de2
	I0421 20:09:30.933549   12908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"328","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0421 20:09:31.116600   12908 request.go:629] Waited for 182.3542ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:31.116842   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes/multinode-152500
	I0421 20:09:31.117083   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:31.117166   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:31.117288   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:31.120866   12908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:09:31.120866   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:31.120968   12908 round_trippers.go:580]     Audit-Id: 91f13190-755a-4ad2-b644-dc33ead0a300
	I0421 20:09:31.120968   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:31.120968   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:31.120968   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:31.120968   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:31.120968   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:31 GMT
	I0421 20:09:31.121140   12908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0421 20:09:31.121673   12908 pod_ready.go:92] pod "kube-scheduler-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:09:31.121673   12908 pod_ready.go:81] duration metric: took 388.1078ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:09:31.121673   12908 pod_ready.go:38] duration metric: took 1.2015161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:09:31.121673   12908 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:09:31.139600   12908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:09:31.166796   12908 system_svc.go:56] duration metric: took 45.1226ms WaitForService to wait for kubelet
	I0421 20:09:31.166928   12908 kubeadm.go:576] duration metric: took 19.0699699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:09:31.166984   12908 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:09:31.321507   12908 request.go:629] Waited for 154.1149ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.198.190:8443/api/v1/nodes
	I0421 20:09:31.321566   12908 round_trippers.go:463] GET https://172.27.198.190:8443/api/v1/nodes
	I0421 20:09:31.321566   12908 round_trippers.go:469] Request Headers:
	I0421 20:09:31.321566   12908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:09:31.321674   12908 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:09:31.325960   12908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:09:31.325960   12908 round_trippers.go:577] Response Headers:
	I0421 20:09:31.325960   12908 round_trippers.go:580]     Audit-Id: bbb79c57-508a-45ed-a61e-3e1ec6734681
	I0421 20:09:31.326345   12908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:09:31.326345   12908 round_trippers.go:580]     Content-Type: application/json
	I0421 20:09:31.326345   12908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:09:31.326345   12908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:09:31.326345   12908 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:09:31 GMT
	I0421 20:09:31.326924   12908 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"651"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"455","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0421 20:09:31.327798   12908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:09:31.327798   12908 node_conditions.go:123] node cpu capacity is 2
	I0421 20:09:31.328149   12908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:09:31.328149   12908 node_conditions.go:123] node cpu capacity is 2
	I0421 20:09:31.328149   12908 node_conditions.go:105] duration metric: took 161.1632ms to run NodePressure ...
	I0421 20:09:31.328149   12908 start.go:240] waiting for startup goroutines ...
	I0421 20:09:31.328149   12908 start.go:254] writing updated cluster config ...
	I0421 20:09:31.342211   12908 ssh_runner.go:195] Run: rm -f paused
	I0421 20:09:31.494381   12908 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0421 20:09:31.499645   12908 out.go:177] * Done! kubectl is now configured to use "multinode-152500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.444244892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.469882040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.470203444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.471957961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.480393942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:06:21 multinode-152500 cri-dockerd[1225]: time="2024-04-21T20:06:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c9a9145e83af6559d13f8bcb0b1f62acfc8dc0a98fa46a08209c3e8f02c57f71/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 20:06:21 multinode-152500 cri-dockerd[1225]: time="2024-04-21T20:06:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d6ef972126a9057f2452dfedb9446b9d811d4c75e15834f4e2b33f8c43fd330a/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.869240247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.869581851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.869606451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:06:21 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:21.869729252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:06:22 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:22.053979024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:06:22 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:22.054138725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:06:22 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:22.054189726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:06:22 multinode-152500 dockerd[1326]: time="2024-04-21T20:06:22.054611129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:09:57 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:57.171989587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:09:57 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:57.173283192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:09:57 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:57.174680197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:09:57 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:57.175233999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:09:57 multinode-152500 cri-dockerd[1225]: time="2024-04-21T20:09:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3cc4feec2773e4c2a7e0cb4b9f5b570908d5adc715bd76990c42af1d4a252640/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 21 20:09:58 multinode-152500 cri-dockerd[1225]: time="2024-04-21T20:09:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 21 20:09:58 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:58.826483817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:09:58 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:58.826615918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:09:58 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:58.826634518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:09:58 multinode-152500 dockerd[1326]: time="2024-04-21T20:09:58.827017619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	278fdd61d87c0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   3cc4feec2773e       busybox-fc5497c4f-l6544
	a6fab3c7e2816       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   d6ef972126a90       coredns-7db6d8ff4d-v7pf8
	bc85f90f7b185       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   c9a9145e83af6       storage-provisioner
	ad328e25a9d02       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   0e66350415f0c       kindnet-vb8ws
	7f128889bd612       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                0                   a3675838aa7c8       kube-proxy-kl8t2
	7ecc14e6d519e       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   5a55ab72d84e7       etcd-multinode-152500
	eb483e47dc21d       c42f13656d0b2                                                                                         5 minutes ago       Running             kube-apiserver            0                   6dd47a357dc90       kube-apiserver-multinode-152500
	0bd5af3b1831b       259c8277fcbbc                                                                                         5 minutes ago       Running             kube-scheduler            0                   b0eb5fe004810       kube-scheduler-multinode-152500
	0690342790fe5       c7aad43836fa5                                                                                         5 minutes ago       Running             kube-controller-manager   0                   e6ae7d993bb91       kube-controller-manager-multinode-152500
	
	
	==> coredns [a6fab3c7e281] <==
	[INFO] 10.244.1.2:50247 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196501s
	[INFO] 10.244.0.3:49015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000349501s
	[INFO] 10.244.0.3:52240 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117501s
	[INFO] 10.244.0.3:37053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001616s
	[INFO] 10.244.0.3:37130 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000252701s
	[INFO] 10.244.0.3:56209 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000306401s
	[INFO] 10.244.0.3:41964 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117s
	[INFO] 10.244.0.3:39822 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001671s
	[INFO] 10.244.0.3:48735 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001334s
	[INFO] 10.244.1.2:44124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002673s
	[INFO] 10.244.1.2:39375 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000849s
	[INFO] 10.244.1.2:47331 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000754s
	[INFO] 10.244.1.2:33685 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000723s
	[INFO] 10.244.0.3:49605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001058s
	[INFO] 10.244.0.3:54097 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000798s
	[INFO] 10.244.0.3:59400 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085901s
	[INFO] 10.244.0.3:38777 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001295s
	[INFO] 10.244.1.2:46340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116s
	[INFO] 10.244.1.2:38103 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173401s
	[INFO] 10.244.1.2:56467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001767s
	[INFO] 10.244.1.2:35140 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095401s
	[INFO] 10.244.0.3:56335 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002217s
	[INFO] 10.244.0.3:59693 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126301s
	[INFO] 10.244.0.3:33936 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000798s
	[INFO] 10.244.0.3:33049 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000631802s
	
	
	==> describe nodes <==
	Name:               multinode-152500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-152500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-152500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T20_05_54_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 20:05:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-152500
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:10:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:10:28 +0000   Sun, 21 Apr 2024 20:05:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:10:28 +0000   Sun, 21 Apr 2024 20:05:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:10:28 +0000   Sun, 21 Apr 2024 20:05:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:10:28 +0000   Sun, 21 Apr 2024 20:06:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.198.190
	  Hostname:    multinode-152500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa127788d5f64ca4bf7a80d5627cc0f6
	  System UUID:                f600d953-6b53-3d42-a020-58dc7452e9bc
	  Boot ID:                    d172d5cf-a356-4286-8abc-65739e4156ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l6544                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-7db6d8ff4d-v7pf8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m41s
	  kube-system                 etcd-multinode-152500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m55s
	  kube-system                 kindnet-vb8ws                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m41s
	  kube-system                 kube-apiserver-multinode-152500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-multinode-152500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-kl8t2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-multinode-152500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  Starting                 5m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)    kubelet          Node multinode-152500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)    kubelet          Node multinode-152500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)    kubelet          Node multinode-152500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x2 over 4m55s)  kubelet          Node multinode-152500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x2 over 4m55s)  kubelet          Node multinode-152500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x2 over 4m55s)  kubelet          Node multinode-152500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m42s                  node-controller  Node multinode-152500 event: Registered Node multinode-152500 in Controller
	  Normal  NodeReady                4m28s                  kubelet          Node multinode-152500 status is now: NodeReady
	
	
	Name:               multinode-152500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-152500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-152500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T20_09_11_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 20:09:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-152500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:10:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:10:12 +0000   Sun, 21 Apr 2024 20:09:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:10:12 +0000   Sun, 21 Apr 2024 20:09:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:10:12 +0000   Sun, 21 Apr 2024 20:09:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:10:12 +0000   Sun, 21 Apr 2024 20:09:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.195.108
	  Hostname:    multinode-152500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 75a83d4f235d4ffaad6a8e197822a098
	  System UUID:                878d0256-95a4-6549-a6bd-12de64a17f7c
	  Boot ID:                    451bad06-7746-4983-9a6b-d284bd187ea8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-82tdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-rkgsx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      97s
	  kube-system                 kube-proxy-9zlm5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  RegisteredNode           97s                node-controller  Node multinode-152500-m02 event: Registered Node multinode-152500-m02 in Controller
	  Normal  NodeHasSufficientMemory  97s (x2 over 98s)  kubelet          Node multinode-152500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x2 over 98s)  kubelet          Node multinode-152500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x2 over 98s)  kubelet          Node multinode-152500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                kubelet          Node multinode-152500-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.675912] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr21 20:04] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.175480] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[Apr21 20:05] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.105609] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.622216] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +0.219517] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.266766] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +2.905413] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.212532] systemd-fstab-generator[1191]: Ignoring "noauto" option for root device
	[  +0.229947] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.307518] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +11.740964] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.126343] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.815788] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +7.659462] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.108679] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.051569] systemd-fstab-generator[2125]: Ignoring "noauto" option for root device
	[  +0.165306] kauditd_printk_skb: 62 callbacks suppressed
	[Apr21 20:06] systemd-fstab-generator[2309]: Ignoring "noauto" option for root device
	[  +0.211466] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.344273] kauditd_printk_skb: 51 callbacks suppressed
	[Apr21 20:09] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7ecc14e6d519] <==
	{"level":"info","ts":"2024-04-21T20:05:47.035662Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T20:05:47.039632Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.198.190:2379"}
	{"level":"info","ts":"2024-04-21T20:05:47.044148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T20:05:47.048835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T20:05:47.059638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-04-21T20:06:15.359076Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.922826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:06:15.360145Z","caller":"traceutil/trace.go:171","msg":"trace[1006088935] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:411; }","duration":"306.138141ms","start":"2024-04-21T20:06:15.05399Z","end":"2024-04-21T20:06:15.360128Z","steps":["trace[1006088935] 'range keys from in-memory index tree'  (duration: 304.858825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:06:15.360584Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-21T20:06:15.053973Z","time spent":"306.537246ms","remote":"127.0.0.1:48058","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-21T20:06:15.360858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.571619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-152500\" ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2024-04-21T20:06:15.360925Z","caller":"traceutil/trace.go:171","msg":"trace[2039279407] range","detail":"{range_begin:/registry/minions/multinode-152500; range_end:; response_count:1; response_revision:411; }","duration":"133.747822ms","start":"2024-04-21T20:06:15.227166Z","end":"2024-04-21T20:06:15.360914Z","steps":["trace[2039279407] 'range keys from in-memory index tree'  (duration: 133.483618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-21T20:06:15.361368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.30887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:06:15.361521Z","caller":"traceutil/trace.go:171","msg":"trace[70590978] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:411; }","duration":"176.486072ms","start":"2024-04-21T20:06:15.185026Z","end":"2024-04-21T20:06:15.361512Z","steps":["trace[70590978] 'range keys from in-memory index tree'  (duration: 176.255269ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:06:15.521578Z","caller":"traceutil/trace.go:171","msg":"trace[380991790] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"149.17982ms","start":"2024-04-21T20:06:15.372378Z","end":"2024-04-21T20:06:15.521557Z","steps":["trace[380991790] 'process raft request'  (duration: 148.980518ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:06:36.503104Z","caller":"traceutil/trace.go:171","msg":"trace[1751492349] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"141.346781ms","start":"2024-04-21T20:06:36.361724Z","end":"2024-04-21T20:06:36.50307Z","steps":["trace[1751492349] 'process raft request'  (duration: 141.20348ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:06:44.761412Z","caller":"traceutil/trace.go:171","msg":"trace[2052684170] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"125.040519ms","start":"2024-04-21T20:06:44.636347Z","end":"2024-04-21T20:06:44.761388Z","steps":["trace[2052684170] 'process raft request'  (duration: 124.824018ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:09:04.157669Z","caller":"traceutil/trace.go:171","msg":"trace[590649835] linearizableReadLoop","detail":"{readStateIndex:630; appliedIndex:629; }","duration":"104.971616ms","start":"2024-04-21T20:09:04.052679Z","end":"2024-04-21T20:09:04.15765Z","steps":["trace[590649835] 'read index received'  (duration: 104.931416ms)","trace[590649835] 'applied index is now lower than readState.Index'  (duration: 39.6µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:09:04.158125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.502418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-21T20:09:04.158187Z","caller":"traceutil/trace.go:171","msg":"trace[1084403408] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:581; }","duration":"105.606518ms","start":"2024-04-21T20:09:04.05257Z","end":"2024-04-21T20:09:04.158177Z","steps":["trace[1084403408] 'agreement among raft nodes before linearized reading'  (duration: 105.224817ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:09:04.158651Z","caller":"traceutil/trace.go:171","msg":"trace[1012175638] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"124.716794ms","start":"2024-04-21T20:09:04.033924Z","end":"2024-04-21T20:09:04.158641Z","steps":["trace[1012175638] 'process raft request'  (duration: 123.61839ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:09:14.406887Z","caller":"traceutil/trace.go:171","msg":"trace[1398036322] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"168.328959ms","start":"2024-04-21T20:09:14.238538Z","end":"2024-04-21T20:09:14.406867Z","steps":["trace[1398036322] 'process raft request'  (duration: 167.861457ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:09:21.5604Z","caller":"traceutil/trace.go:171","msg":"trace[1330940847] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"238.880329ms","start":"2024-04-21T20:09:21.321349Z","end":"2024-04-21T20:09:21.56023Z","steps":["trace[1330940847] 'process raft request'  (duration: 238.760628ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:09:21.566163Z","caller":"traceutil/trace.go:171","msg":"trace[2118843132] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"228.86339ms","start":"2024-04-21T20:09:21.337287Z","end":"2024-04-21T20:09:21.566151Z","steps":["trace[2118843132] 'process raft request'  (duration: 228.228587ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-21T20:09:21.565787Z","caller":"traceutil/trace.go:171","msg":"trace[1661952753] linearizableReadLoop","detail":"{readStateIndex:683; appliedIndex:682; }","duration":"141.199648ms","start":"2024-04-21T20:09:21.42457Z","end":"2024-04-21T20:09:21.56577Z","steps":["trace[1661952753] 'read index received'  (duration: 135.963128ms)","trace[1661952753] 'applied index is now lower than readState.Index'  (duration: 5.23602ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-21T20:09:21.568202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.536557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-152500-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-04-21T20:09:21.568905Z","caller":"traceutil/trace.go:171","msg":"trace[416169402] range","detail":"{range_begin:/registry/minions/multinode-152500-m02; range_end:; response_count:1; response_revision:629; }","duration":"144.432062ms","start":"2024-04-21T20:09:21.424461Z","end":"2024-04-21T20:09:21.568893Z","steps":["trace[416169402] 'agreement among raft nodes before linearized reading'  (duration: 143.605058ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:10:48 up 7 min,  0 users,  load average: 0.26, 0.31, 0.17
	Linux multinode-152500 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ad328e25a9d0] <==
	I0421 20:09:47.112431       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:09:57.129733       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:09:57.130145       1 main.go:227] handling current node
	I0421 20:09:57.130272       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:09:57.130580       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:10:07.146678       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:10:07.146772       1 main.go:227] handling current node
	I0421 20:10:07.146787       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:10:07.147506       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:10:17.154451       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:10:17.154573       1 main.go:227] handling current node
	I0421 20:10:17.154589       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:10:17.154598       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:10:27.161955       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:10:27.162232       1 main.go:227] handling current node
	I0421 20:10:27.162526       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:10:27.162666       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:10:37.170166       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:10:37.170293       1 main.go:227] handling current node
	I0421 20:10:37.170309       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:10:37.170319       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:10:47.179092       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:10:47.179320       1 main.go:227] handling current node
	I0421 20:10:47.179339       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:10:47.179414       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [eb483e47dc21] <==
	I0421 20:05:50.854243       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0421 20:05:50.863036       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0421 20:05:50.863132       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0421 20:05:52.168907       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0421 20:05:52.306648       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0421 20:05:52.529214       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0421 20:05:52.559286       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.198.190]
	I0421 20:05:52.560501       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 20:05:52.585492       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0421 20:05:52.996413       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0421 20:05:53.288179       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0421 20:05:53.319895       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0421 20:05:53.379112       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0421 20:06:06.883969       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0421 20:06:07.153553       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0421 20:10:02.351225       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:61996: use of closed network connection
	E0421 20:10:02.918435       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:61998: use of closed network connection
	E0421 20:10:03.529541       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62000: use of closed network connection
	E0421 20:10:04.094551       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62002: use of closed network connection
	E0421 20:10:04.658713       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62004: use of closed network connection
	E0421 20:10:05.212773       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62006: use of closed network connection
	E0421 20:10:06.225763       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62009: use of closed network connection
	E0421 20:10:16.796947       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62011: use of closed network connection
	E0421 20:10:17.348424       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62016: use of closed network connection
	E0421 20:10:27.926169       1 conn.go:339] Error on socket receive: read tcp 172.27.198.190:8443->172.27.192.1:62018: use of closed network connection
	
	
	==> kube-controller-manager [0690342790fe] <==
	I0421 20:06:07.526655       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0421 20:06:07.848603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="950.985127ms"
	I0421 20:06:07.914168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.37388ms"
	I0421 20:06:07.981239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.023912ms"
	I0421 20:06:07.981426       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="146.103µs"
	I0421 20:06:08.420617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.382968ms"
	I0421 20:06:08.469236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.5683ms"
	I0421 20:06:08.469695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.202µs"
	I0421 20:06:20.790271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="497.705µs"
	I0421 20:06:20.841367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.501µs"
	I0421 20:06:21.810761       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0421 20:06:23.250431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.6µs"
	I0421 20:06:23.338634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.590136ms"
	I0421 20:06:23.338743       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.901µs"
	I0421 20:09:11.051450       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-152500-m02\" does not exist"
	I0421 20:09:11.075589       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-152500-m02" podCIDRs=["10.244.1.0/24"]
	I0421 20:09:11.846696       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-152500-m02"
	I0421 20:09:29.719456       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:09:56.628625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.387442ms"
	I0421 20:09:56.669605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.858655ms"
	I0421 20:09:56.670085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
	I0421 20:09:56.670437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.1µs"
	I0421 20:09:59.481408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.470647ms"
	I0421 20:09:59.497553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.105139ms"
	I0421 20:09:59.497729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.4µs"
	
	
	==> kube-proxy [7f128889bd61] <==
	I0421 20:06:08.871442       1 server_linux.go:69] "Using iptables proxy"
	I0421 20:06:08.919143       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.198.190"]
	I0421 20:06:08.999885       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 20:06:09.000253       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 20:06:09.000550       1 server_linux.go:165] "Using iptables Proxier"
	I0421 20:06:09.006102       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 20:06:09.008607       1 server.go:872] "Version info" version="v1.30.0"
	I0421 20:06:09.008971       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 20:06:09.013742       1 config.go:192] "Starting service config controller"
	I0421 20:06:09.014250       1 config.go:101] "Starting endpoint slice config controller"
	I0421 20:06:09.015000       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 20:06:09.015212       1 config.go:319] "Starting node config controller"
	I0421 20:06:09.020499       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 20:06:09.015112       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 20:06:09.120519       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 20:06:09.121078       1 shared_informer.go:320] Caches are synced for service config
	I0421 20:06:09.121101       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0bd5af3b1831] <==
	W0421 20:05:51.011936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0421 20:05:51.012043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 20:05:51.038577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 20:05:51.038737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 20:05:51.067122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:51.067226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:51.077278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 20:05:51.077955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 20:05:51.189663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 20:05:51.190622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 20:05:51.259498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 20:05:51.259866       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 20:05:51.289701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 20:05:51.290247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 20:05:51.312769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:51.313151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:51.317544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 20:05:51.317832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 20:05:51.395001       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:51.395127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:51.575075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 20:05:51.575156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 20:05:51.605406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 20:05:51.606239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0421 20:05:52.716384       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:06:23 multinode-152500 kubelet[2132]: I0421 20:06:23.251648    2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.251629798 podStartE2EDuration="8.251629798s" podCreationTimestamp="2024-04-21 20:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-21 20:06:22.234895127 +0000 UTC m=+29.067418022" watchObservedRunningTime="2024-04-21 20:06:23.251629798 +0000 UTC m=+30.084152693"
	Apr 21 20:06:23 multinode-152500 kubelet[2132]: I0421 20:06:23.288522    2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v7pf8" podStartSLOduration=16.288506882 podStartE2EDuration="16.288506882s" podCreationTimestamp="2024-04-21 20:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-21 20:06:23.252126502 +0000 UTC m=+30.084649497" watchObservedRunningTime="2024-04-21 20:06:23.288506882 +0000 UTC m=+30.121029777"
	Apr 21 20:06:53 multinode-152500 kubelet[2132]: E0421 20:06:53.423614    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:06:53 multinode-152500 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:06:53 multinode-152500 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:06:53 multinode-152500 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:06:53 multinode-152500 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:07:53 multinode-152500 kubelet[2132]: E0421 20:07:53.423595    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:07:53 multinode-152500 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:07:53 multinode-152500 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:07:53 multinode-152500 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:07:53 multinode-152500 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:08:53 multinode-152500 kubelet[2132]: E0421 20:08:53.423160    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:08:53 multinode-152500 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:08:53 multinode-152500 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:08:53 multinode-152500 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:08:53 multinode-152500 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:09:53 multinode-152500 kubelet[2132]: E0421 20:09:53.423032    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:09:53 multinode-152500 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:09:53 multinode-152500 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:09:53 multinode-152500 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:09:53 multinode-152500 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:09:56 multinode-152500 kubelet[2132]: I0421 20:09:56.612510    2132 topology_manager.go:215] "Topology Admit Handler" podUID="62c649d2-6713-4642-96dc-8533faeb750f" podNamespace="default" podName="busybox-fc5497c4f-l6544"
	Apr 21 20:09:56 multinode-152500 kubelet[2132]: I0421 20:09:56.784916    2132 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsv5t\" (UniqueName: \"kubernetes.io/projected/62c649d2-6713-4642-96dc-8533faeb750f-kube-api-access-jsv5t\") pod \"busybox-fc5497c4f-l6544\" (UID: \"62c649d2-6713-4642-96dc-8533faeb750f\") " pod="default/busybox-fc5497c4f-l6544"
	Apr 21 20:09:57 multinode-152500 kubelet[2132]: I0421 20:09:57.404215    2132 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc4feec2773e4c2a7e0cb4b9f5b570908d5adc715bd76990c42af1d4a252640"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 20:10:40.439307    6980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-152500 -n multinode-152500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-152500 -n multinode-152500: (12.5083293s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-152500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (58.00s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (449.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-152500
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-152500
E0421 20:25:57.480873   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-152500: (1m40.8702282s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-152500 --wait=true -v=8 --alsologtostderr
E0421 20:27:54.256748   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 20:30:36.937657   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 20:32:00.131418   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-152500 --wait=true -v=8 --alsologtostderr: exit status 1 (5m10.1945994s)

                                                
                                                
-- stdout --
	* [multinode-152500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-152500" primary control-plane node in "multinode-152500" cluster
	* Restarting existing hyperv VM for "multinode-152500" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-152500-m02" worker node in "multinode-152500" cluster
	* Restarting existing hyperv VM for "multinode-152500-m02" ...
	* Found network options:
	  - NO_PROXY=172.27.197.221
	  - NO_PROXY=172.27.197.221
	* Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	  - env NO_PROXY=172.27.197.221
	* Verifying Kubernetes components...
	
	* Starting "multinode-152500-m03" worker node in "multinode-152500" cluster
	* Restarting existing hyperv VM for "multinode-152500-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 20:27:30.757242    7460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0421 20:27:30.836149    7460 out.go:291] Setting OutFile to fd 780 ...
	I0421 20:27:30.837153    7460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:27:30.837153    7460 out.go:304] Setting ErrFile to fd 748...
	I0421 20:27:30.837153    7460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:27:30.867766    7460 out.go:298] Setting JSON to false
	I0421 20:27:30.873064    7460 start.go:129] hostinfo: {"hostname":"minikube6","uptime":17126,"bootTime":1713714124,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 20:27:30.873064    7460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 20:27:30.999605    7460 out.go:177] * [multinode-152500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 20:27:31.154617    7460 notify.go:220] Checking for updates...
	I0421 20:27:31.199233    7460 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:27:31.347392    7460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 20:27:31.444033    7460 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 20:27:31.609378    7460 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 20:27:31.738376    7460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 20:27:31.855711    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:27:31.855865    7460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 20:27:37.441004    7460 out.go:177] * Using the hyperv driver based on existing profile
	I0421 20:27:37.556213    7460 start.go:297] selected driver: hyperv
	I0421 20:27:37.556732    7460 start.go:901] validating driver "hyperv" against &{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:27:37.556959    7460 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 20:27:37.616262    7460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:27:37.616262    7460 cni.go:84] Creating CNI manager for ""
	I0421 20:27:37.616262    7460 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 20:27:37.616463    7460 start.go:340] cluster config:
	{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:27:37.616463    7460 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:27:37.850017    7460 out.go:177] * Starting "multinode-152500" primary control-plane node in "multinode-152500" cluster
	I0421 20:27:38.001415    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:27:38.002628    7460 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 20:27:38.002814    7460 cache.go:56] Caching tarball of preloaded images
	I0421 20:27:38.003218    7460 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:27:38.003559    7460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:27:38.003906    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:27:38.006976    7460 start.go:360] acquireMachinesLock for multinode-152500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:27:38.007175    7460 start.go:364] duration metric: took 120.6µs to acquireMachinesLock for "multinode-152500"
	I0421 20:27:38.007175    7460 start.go:96] Skipping create...Using existing machine configuration
	I0421 20:27:38.007175    7460 fix.go:54] fixHost starting: 
	I0421 20:27:38.007941    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:40.796629    7460 main.go:141] libmachine: [stdout =====>] : Off
	
	I0421 20:27:40.796629    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:40.796965    7460 fix.go:112] recreateIfNeeded on multinode-152500: state=Stopped err=<nil>
	W0421 20:27:40.797030    7460 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 20:27:40.802092    7460 out.go:177] * Restarting existing hyperv VM for "multinode-152500" ...
	I0421 20:27:40.804199    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-152500
	I0421 20:27:43.932256    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:27:43.932685    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:43.932685    7460 main.go:141] libmachine: Waiting for host to start...
	I0421 20:27:43.932685    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:46.202224    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:27:46.202404    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:46.202494    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:27:48.787474    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:27:48.787905    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:49.795361    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:52.017481    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:27:52.017727    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:52.017836    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:27:54.602569    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:27:54.602621    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:55.602995    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:57.824166    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:27:57.824695    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:57.824695    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:00.448610    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:28:00.448610    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:01.453914    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:03.637903    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:03.637903    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:03.637903    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:06.212801    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:28:06.213324    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:07.220091    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:09.447995    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:09.447995    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:09.448269    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:12.076402    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:12.076402    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:12.079747    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:14.207419    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:14.207495    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:14.207495    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:16.880216    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:16.880216    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:16.880996    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:28:16.883599    7460 machine.go:94] provisionDockerMachine start ...
	I0421 20:28:16.883599    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:19.048518    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:19.049464    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:19.049464    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:21.699792    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:21.700736    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:21.707110    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:21.707795    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:21.707795    7460 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 20:28:21.855619    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 20:28:21.855720    7460 buildroot.go:166] provisioning hostname "multinode-152500"
	I0421 20:28:21.855720    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:24.037388    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:24.037388    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:24.038181    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:26.699846    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:26.700099    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:26.706170    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:26.706868    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:26.706868    7460 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-152500 && echo "multinode-152500" | sudo tee /etc/hostname
	I0421 20:28:26.886257    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-152500
	
	I0421 20:28:26.886257    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:29.049131    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:29.049572    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:29.049671    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:31.702638    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:31.702638    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:31.710165    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:31.710311    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:31.710311    7460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-152500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-152500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-152500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:28:31.871951    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:28:31.871951    7460 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 20:28:31.871951    7460 buildroot.go:174] setting up certificates
	I0421 20:28:31.871951    7460 provision.go:84] configureAuth start
	I0421 20:28:31.871951    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:34.037047    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:34.037047    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:34.037153    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:36.679209    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:36.679209    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:36.679209    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:38.876126    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:38.876213    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:38.876213    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:41.532261    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:41.532261    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:41.532819    7460 provision.go:143] copyHostCerts
	I0421 20:28:41.532819    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 20:28:41.533324    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 20:28:41.533324    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 20:28:41.533324    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 20:28:41.534976    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 20:28:41.535267    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 20:28:41.535338    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 20:28:41.535771    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 20:28:41.536691    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 20:28:41.536691    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 20:28:41.536691    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 20:28:41.537500    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 20:28:41.537674    7460 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-152500 san=[127.0.0.1 172.27.197.221 localhost minikube multinode-152500]
	I0421 20:28:41.840504    7460 provision.go:177] copyRemoteCerts
	I0421 20:28:41.854358    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:28:41.854455    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:44.023427    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:44.024272    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:44.024272    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:46.675964    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:46.675964    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:46.676724    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:28:46.789409    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9350162s)
	I0421 20:28:46.789409    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 20:28:46.789994    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:28:46.842465    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 20:28:46.843145    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0421 20:28:46.896777    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 20:28:46.897321    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 20:28:46.946192    7460 provision.go:87] duration metric: took 15.0741318s to configureAuth
	I0421 20:28:46.946192    7460 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:28:46.946894    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:28:46.947060    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:49.121279    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:49.121279    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:49.122089    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:51.804741    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:51.804741    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:51.813802    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:51.814665    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:51.814665    7460 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 20:28:51.959471    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 20:28:51.959592    7460 buildroot.go:70] root file system type: tmpfs
	I0421 20:28:51.959969    7460 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 20:28:51.960135    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:54.154331    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:54.155025    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:54.155171    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:56.805005    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:56.806024    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:56.814855    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:56.815303    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:56.815303    7460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 20:28:56.992829    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 20:28:56.992829    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:59.153715    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:59.153957    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:59.154070    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:01.800225    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:01.800469    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:01.810052    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:29:01.810206    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:29:01.810206    7460 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 20:29:04.479887    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 20:29:04.479887    7460 machine.go:97] duration metric: took 47.5959417s to provisionDockerMachine
	I0421 20:29:04.479887    7460 start.go:293] postStartSetup for "multinode-152500" (driver="hyperv")
	I0421 20:29:04.479887    7460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:29:04.495796    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:29:04.495796    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:06.654332    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:06.654528    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:06.654636    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:09.225470    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:09.226306    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:09.226368    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:29:09.344020    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8481883s)
	I0421 20:29:09.357755    7460 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:29:09.365191    7460 command_runner.go:130] > NAME=Buildroot
	I0421 20:29:09.365191    7460 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 20:29:09.365191    7460 command_runner.go:130] > ID=buildroot
	I0421 20:29:09.365191    7460 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 20:29:09.365191    7460 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 20:29:09.365191    7460 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:29:09.365191    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 20:29:09.365191    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 20:29:09.365191    7460 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 20:29:09.365191    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 20:29:09.380836    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:29:09.403827    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 20:29:09.457424    7460 start.go:296] duration metric: took 4.9775008s for postStartSetup
	I0421 20:29:09.457692    7460 fix.go:56] duration metric: took 1m31.449852s for fixHost
	I0421 20:29:09.457812    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:11.577440    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:11.577492    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:11.577492    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:14.183266    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:14.183266    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:14.190440    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:29:14.191109    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:29:14.191109    7460 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 20:29:14.332423    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713731354.337368033
	
	I0421 20:29:14.332423    7460 fix.go:216] guest clock: 1713731354.337368033
	I0421 20:29:14.332423    7460 fix.go:229] Guest: 2024-04-21 20:29:14.337368033 +0000 UTC Remote: 2024-04-21 20:29:09.457777 +0000 UTC m=+98.804506201 (delta=4.879591033s)
	I0421 20:29:14.332596    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:16.478711    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:16.478711    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:16.478858    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:19.058323    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:19.058379    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:19.066574    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:29:19.067058    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:29:19.067154    7460 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713731354
	I0421 20:29:19.231299    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 20:29:14 UTC 2024
	
	I0421 20:29:19.231299    7460 fix.go:236] clock set: Sun Apr 21 20:29:14 UTC 2024
	 (err=<nil>)
	I0421 20:29:19.231299    7460 start.go:83] releasing machines lock for "multinode-152500", held for 1m41.2233877s
	I0421 20:29:19.231648    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:21.382240    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:21.382240    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:21.382240    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:23.997140    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:23.997516    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:24.001950    7460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:29:24.002027    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:24.014425    7460 ssh_runner.go:195] Run: cat /version.json
	I0421 20:29:24.014425    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:26.210810    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:26.210810    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:26.211475    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:26.222037    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:26.222037    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:26.222037    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:28.963571    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:28.963571    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:28.964199    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:29:28.996407    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:28.997334    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:28.997665    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:29:29.062039    7460 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0421 20:29:29.062242    7460 ssh_runner.go:235] Completed: cat /version.json: (5.0477797s)
	I0421 20:29:29.075567    7460 ssh_runner.go:195] Run: systemctl --version
	I0421 20:29:29.180742    7460 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 20:29:29.180855    7460 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1788659s)
	I0421 20:29:29.180931    7460 command_runner.go:130] > systemd 252 (252)
	I0421 20:29:29.180989    7460 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0421 20:29:29.194263    7460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 20:29:29.203039    7460 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0421 20:29:29.203788    7460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:29:29.217411    7460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:29:29.248486    7460 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0421 20:29:29.249448    7460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 20:29:29.249533    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:29:29.249838    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:29:29.286676    7460 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0421 20:29:29.304163    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 20:29:29.343352    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 20:29:29.364847    7460 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 20:29:29.381897    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 20:29:29.418042    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:29:29.459121    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 20:29:29.494626    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:29:29.530585    7460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:29:29.568679    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 20:29:29.603356    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 20:29:29.641053    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 20:29:29.679587    7460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:29:29.702062    7460 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 20:29:29.715363    7460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:29:29.754036    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:29.989172    7460 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 20:29:30.028182    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:29:30.042286    7460 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 20:29:30.068584    7460 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0421 20:29:30.069211    7460 command_runner.go:130] > [Unit]
	I0421 20:29:30.069211    7460 command_runner.go:130] > Description=Docker Application Container Engine
	I0421 20:29:30.069211    7460 command_runner.go:130] > Documentation=https://docs.docker.com
	I0421 20:29:30.069211    7460 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0421 20:29:30.069211    7460 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0421 20:29:30.069211    7460 command_runner.go:130] > StartLimitBurst=3
	I0421 20:29:30.069211    7460 command_runner.go:130] > StartLimitIntervalSec=60
	I0421 20:29:30.069339    7460 command_runner.go:130] > [Service]
	I0421 20:29:30.069339    7460 command_runner.go:130] > Type=notify
	I0421 20:29:30.069339    7460 command_runner.go:130] > Restart=on-failure
	I0421 20:29:30.069339    7460 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0421 20:29:30.069339    7460 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0421 20:29:30.069339    7460 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0421 20:29:30.069339    7460 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0421 20:29:30.069339    7460 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0421 20:29:30.069339    7460 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0421 20:29:30.069339    7460 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0421 20:29:30.069339    7460 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0421 20:29:30.069339    7460 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0421 20:29:30.069533    7460 command_runner.go:130] > ExecStart=
	I0421 20:29:30.069579    7460 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0421 20:29:30.069606    7460 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0421 20:29:30.069606    7460 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0421 20:29:30.069606    7460 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0421 20:29:30.069606    7460 command_runner.go:130] > LimitNOFILE=infinity
	I0421 20:29:30.069606    7460 command_runner.go:130] > LimitNPROC=infinity
	I0421 20:29:30.069679    7460 command_runner.go:130] > LimitCORE=infinity
	I0421 20:29:30.069710    7460 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0421 20:29:30.069710    7460 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0421 20:29:30.069710    7460 command_runner.go:130] > TasksMax=infinity
	I0421 20:29:30.069710    7460 command_runner.go:130] > TimeoutStartSec=0
	I0421 20:29:30.069710    7460 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0421 20:29:30.069710    7460 command_runner.go:130] > Delegate=yes
	I0421 20:29:30.069801    7460 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0421 20:29:30.069832    7460 command_runner.go:130] > KillMode=process
	I0421 20:29:30.069832    7460 command_runner.go:130] > [Install]
	I0421 20:29:30.069886    7460 command_runner.go:130] > WantedBy=multi-user.target
	I0421 20:29:30.086342    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:29:30.126233    7460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:29:30.194615    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:29:30.236538    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:29:30.278341    7460 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 20:29:30.351369    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:29:30.379191    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:29:30.419070    7460 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0421 20:29:30.432510    7460 ssh_runner.go:195] Run: which cri-dockerd
	I0421 20:29:30.440042    7460 command_runner.go:130] > /usr/bin/cri-dockerd
	I0421 20:29:30.453981    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 20:29:30.475542    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 20:29:30.528275    7460 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 20:29:30.771084    7460 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 20:29:31.010761    7460 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 20:29:31.010761    7460 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
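	[editor's sketch] The 130-byte /etc/docker/daemon.json written above is not echoed in the log, so its exact keys are an assumption here. Purely as an illustration of the "configuring docker to use cgroupfs as cgroup driver" step, a minimal daemon.json selecting that driver could be produced like this (hypothetical content, not the file minikube actually writes):

	// sketch only: hypothetical minimal daemon.json for the cgroupfs step above
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed minimal config; the real 130-byte file may carry additional keys.
		cfg := map[string]interface{}{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b)) // would be copied to /etc/docker/daemon.json on the guest
	}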
	I0421 20:29:31.071550    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:31.322951    7460 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 20:29:34.031390    7460 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.708419s)
	I0421 20:29:34.049271    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 20:29:34.090397    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:29:34.131042    7460 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 20:29:34.378216    7460 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 20:29:34.612845    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:34.852624    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 20:29:34.897992    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:29:34.940138    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:35.167304    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 20:29:35.297563    7460 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 20:29:35.310546    7460 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 20:29:35.325248    7460 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0421 20:29:35.325248    7460 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 20:29:35.325248    7460 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0421 20:29:35.325526    7460 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0421 20:29:35.325577    7460 command_runner.go:130] > Access: 2024-04-21 20:29:35.205310822 +0000
	I0421 20:29:35.325577    7460 command_runner.go:130] > Modify: 2024-04-21 20:29:35.205310822 +0000
	I0421 20:29:35.325610    7460 command_runner.go:130] > Change: 2024-04-21 20:29:35.210310842 +0000
	I0421 20:29:35.325610    7460 command_runner.go:130] >  Birth: -
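	[editor's sketch] The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step above is a simple existence poll followed by a stat. A minimal standalone sketch of that kind of wait loop (not minikube's actual start.go code; the poll interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the timeout elapses,
	// mirroring the "Will wait 60s for socket path" step above (illustrative only).
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is present")
	}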
	I0421 20:29:35.325723    7460 start.go:562] Will wait 60s for crictl version
	I0421 20:29:35.340079    7460 ssh_runner.go:195] Run: which crictl
	I0421 20:29:35.346375    7460 command_runner.go:130] > /usr/bin/crictl
	I0421 20:29:35.359579    7460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:29:35.424259    7460 command_runner.go:130] > Version:  0.1.0
	I0421 20:29:35.425335    7460 command_runner.go:130] > RuntimeName:  docker
	I0421 20:29:35.425335    7460 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0421 20:29:35.425335    7460 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 20:29:35.425387    7460 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 20:29:35.436886    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:29:35.470737    7460 command_runner.go:130] > 26.0.1
	I0421 20:29:35.484332    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:29:35.518238    7460 command_runner.go:130] > 26.0.1
	I0421 20:29:35.522080    7460 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 20:29:35.522080    7460 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 20:29:35.532045    7460 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 20:29:35.532092    7460 ip.go:210] interface addr: 172.27.192.1/20
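	[editor's sketch] The ip.go lines above walk the host's interfaces looking for one whose name matches the prefix "vEthernet (Default Switch)" and then read its addresses. A rough standalone sketch of that lookup (the exact matching rules minikube applies are an assumption):

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// findByPrefix returns the first interface whose name starts with prefix,
	// roughly mirroring the getIPForInterface search logged above (sketch only).
	func findByPrefix(prefix string) (*net.Interface, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for i := range ifaces {
			if strings.HasPrefix(ifaces[i].Name, prefix) {
				return &ifaces[i], nil
			}
		}
		return nil, fmt.Errorf("no interface matching prefix %q", prefix)
	}

	func main() {
		iface, err := findByPrefix("vEthernet (Default Switch)")
		if err != nil {
			fmt.Println(err)
			return
		}
		addrs, _ := iface.Addrs()
		for _, a := range addrs {
			fmt.Println("interface addr:", a) // e.g. 172.27.192.1/20 in the log above
		}
	}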
	I0421 20:29:35.547086    7460 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 20:29:35.554368    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:29:35.579396    7460 kubeadm.go:877] updating cluster {Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-1
52500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:
false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:29:35.579696    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:29:35.590556    7460 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 20:29:35.617594    7460 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0421 20:29:35.617594    7460 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0421 20:29:35.618318    7460 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0421 20:29:35.618318    7460 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:29:35.618318    7460 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0421 20:29:35.618608    7460 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0421 20:29:35.618608    7460 docker.go:615] Images already preloaded, skipping extraction
	I0421 20:29:35.630860    7460 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0421 20:29:35.657169    7460 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0421 20:29:35.657169    7460 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:29:35.657169    7460 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0421 20:29:35.657169    7460 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0421 20:29:35.657169    7460 cache_images.go:84] Images are preloaded, skipping loading
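	[editor's sketch] cache_images.go:84 skips the preload extraction because every required image already appears in the `docker images --format {{.Repository}}:{{.Tag}}` output. A rough sketch of that kind of check (the image names are copied from the output above; the comparison logic is an assumption, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// allPresent reports whether every required image appears in the
	// `docker images --format {{.Repository}}:{{.Tag}}` output (illustrative sketch).
	func allPresent(dockerImagesOut string, required []string) bool {
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
			have[strings.TrimSpace(line)] = true
		}
		for _, img := range required {
			if !have[img] {
				return false
			}
		}
		return true
	}

	func main() {
		out := "registry.k8s.io/kube-apiserver:v1.30.0\nregistry.k8s.io/etcd:3.5.12-0\nregistry.k8s.io/pause:3.9\n"
		required := []string{"registry.k8s.io/kube-apiserver:v1.30.0", "registry.k8s.io/pause:3.9"}
		fmt.Println("images preloaded:", allPresent(out, required))
	}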
	I0421 20:29:35.657169    7460 kubeadm.go:928] updating node { 172.27.197.221 8443 v1.30.0 docker true true} ...
	I0421 20:29:35.657169    7460 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.197.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 20:29:35.667997    7460 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0421 20:29:35.706956    7460 command_runner.go:130] > cgroupfs
	I0421 20:29:35.707001    7460 cni.go:84] Creating CNI manager for ""
	I0421 20:29:35.707001    7460 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 20:29:35.707001    7460 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:29:35.707001    7460 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.197.221 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-152500 NodeName:multinode-152500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.197.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.197.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:29:35.707539    7460 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.197.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-152500"
	  kubeletExtraArgs:
	    node-ip: 172.27.197.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.197.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0421 20:29:35.722438    7460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:29:35.745977    7460 command_runner.go:130] > kubeadm
	I0421 20:29:35.745977    7460 command_runner.go:130] > kubectl
	I0421 20:29:35.745977    7460 command_runner.go:130] > kubelet
	I0421 20:29:35.746071    7460 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:29:35.760190    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:29:35.784056    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0421 20:29:35.822464    7460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:29:35.862860    7460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0421 20:29:35.917518    7460 ssh_runner.go:195] Run: grep 172.27.197.221	control-plane.minikube.internal$ /etc/hosts
	I0421 20:29:35.930181    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.197.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:29:35.974833    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:36.198145    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:29:36.230156    7460 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500 for IP: 172.27.197.221
	I0421 20:29:36.230156    7460 certs.go:194] generating shared ca certs ...
	I0421 20:29:36.230320    7460 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:36.230921    7460 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 20:29:36.231268    7460 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 20:29:36.231415    7460 certs.go:256] generating profile certs ...
	I0421 20:29:36.232154    7460 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.key
	I0421 20:29:36.232357    7460 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd
	I0421 20:29:36.232357    7460 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.197.221]
	I0421 20:29:36.404379    7460 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd ...
	I0421 20:29:36.404379    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd: {Name:mk151e37b2e5f23f4357e1c585ea50dfc55dbfb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:36.406331    7460 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd ...
	I0421 20:29:36.406331    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd: {Name:mkb0d5b8b39d1bdc0398c0c1cb49a0cc404c6b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:36.407372    7460 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt
	I0421 20:29:36.421322    7460 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key
	I0421 20:29:36.422747    7460 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key
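	[editor's sketch] crypto.go:68 above generates a new apiserver certificate carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 172.27.197.221]. As an illustration only (minikube signs this cert with its own CA and uses its own key handling; this sketch is self-signed for brevity), a certificate with the same IP SANs can be built with crypto/x509:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Illustrative sketch only: shows how the IP SANs from the log end up in a cert.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube-apiserver-example"}, // hypothetical CN
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("172.27.197.221"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key) // self-signed for brevity
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}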
	I0421 20:29:36.422747    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 20:29:36.422747    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 20:29:36.423165    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 20:29:36.423388    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 20:29:36.423528    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 20:29:36.423580    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 20:29:36.423896    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 20:29:36.425093    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 20:29:36.425516    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 20:29:36.425984    7460 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 20:29:36.425984    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 20:29:36.425984    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 20:29:36.426558    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 20:29:36.426926    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 20:29:36.427383    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 20:29:36.427666    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 20:29:36.427986    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 20:29:36.428232    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:36.429858    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:29:36.492937    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:29:36.562537    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:29:36.616734    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:29:36.668328    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 20:29:36.726384    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:29:36.781787    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:29:36.839237    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:29:36.890184    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 20:29:36.940090    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 20:29:36.989872    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:29:37.043671    7460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:29:37.095693    7460 ssh_runner.go:195] Run: openssl version
	I0421 20:29:37.104510    7460 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 20:29:37.119112    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 20:29:37.154406    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.162143    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.162143    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.176263    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.186413    7460 command_runner.go:130] > 3ec20f2e
	I0421 20:29:37.202562    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:29:37.240126    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:29:37.276546    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.285878    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.285967    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.299427    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.311059    7460 command_runner.go:130] > b5213941
	I0421 20:29:37.325728    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:29:37.360767    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 20:29:37.403042    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.411766    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.411766    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.426479    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.437451    7460 command_runner.go:130] > 51391683
	I0421 20:29:37.451209    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 20:29:37.489747    7460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:29:37.496498    7460 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:29:37.496498    7460 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0421 20:29:37.496498    7460 command_runner.go:130] > Device: 8,1	Inode: 531538      Links: 1
	I0421 20:29:37.496498    7460 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 20:29:37.496498    7460 command_runner.go:130] > Access: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.496498    7460 command_runner.go:130] > Modify: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.496498    7460 command_runner.go:130] > Change: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.496498    7460 command_runner.go:130] >  Birth: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.511320    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 20:29:37.521184    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.535632    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 20:29:37.545997    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.559071    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 20:29:37.570595    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.583622    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 20:29:37.594874    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.608548    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 20:29:37.619856    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.633154    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 20:29:37.643354    7460 command_runner.go:130] > Certificate will not expire
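	[editor's sketch] The repeated `openssl x509 -noout -in <cert> -checkend 86400` runs above simply verify that each certificate is still valid 24 hours from now. An equivalent check in Go (a sketch, not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question `openssl x509 -checkend` answers (illustrative sketch).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}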
	I0421 20:29:37.643921    7460 kubeadm.go:391] StartCluster: {Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-1525
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:29:37.655586    7460 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 20:29:37.697329    7460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:29:37.719657    7460 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0421 20:29:37.719754    7460 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0421 20:29:37.719754    7460 command_runner.go:130] > /var/lib/minikube/etcd:
	I0421 20:29:37.719754    7460 command_runner.go:130] > member
	W0421 20:29:37.719831    7460 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 20:29:37.719831    7460 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 20:29:37.719910    7460 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 20:29:37.734242    7460 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 20:29:37.756788    7460 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:29:37.758174    7460 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-152500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:29:37.759147    7460 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-152500" cluster setting kubeconfig missing "multinode-152500" context setting]
	I0421 20:29:37.760127    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:37.775376    7460 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:29:37.776553    7460 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.197.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:29:37.778179    7460 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 20:29:37.792760    7460 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:29:37.813454    7460 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0421 20:29:37.813519    7460 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:29:37.813519    7460 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0421 20:29:37.813519    7460 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0421 20:29:37.813519    7460 command_runner.go:130] >  kind: InitConfiguration
	I0421 20:29:37.813759    7460 command_runner.go:130] >  localAPIEndpoint:
	I0421 20:29:37.813759    7460 command_runner.go:130] > -  advertiseAddress: 172.27.198.190
	I0421 20:29:37.813759    7460 command_runner.go:130] > +  advertiseAddress: 172.27.197.221
	I0421 20:29:37.813759    7460 command_runner.go:130] >    bindPort: 8443
	I0421 20:29:37.813759    7460 command_runner.go:130] >  bootstrapTokens:
	I0421 20:29:37.813759    7460 command_runner.go:130] >    - groups:
	I0421 20:29:37.813849    7460 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0421 20:29:37.813849    7460 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0421 20:29:37.813849    7460 command_runner.go:130] >    name: "multinode-152500"
	I0421 20:29:37.813934    7460 command_runner.go:130] >    kubeletExtraArgs:
	I0421 20:29:37.813934    7460 command_runner.go:130] > -    node-ip: 172.27.198.190
	I0421 20:29:37.813934    7460 command_runner.go:130] > +    node-ip: 172.27.197.221
	I0421 20:29:37.813934    7460 command_runner.go:130] >    taints: []
	I0421 20:29:37.813934    7460 command_runner.go:130] >  ---
	I0421 20:29:37.814106    7460 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0421 20:29:37.814106    7460 command_runner.go:130] >  kind: ClusterConfiguration
	I0421 20:29:37.814106    7460 command_runner.go:130] >  apiServer:
	I0421 20:29:37.814106    7460 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.27.198.190"]
	I0421 20:29:37.814106    7460 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.27.197.221"]
	I0421 20:29:37.814106    7460 command_runner.go:130] >    extraArgs:
	I0421 20:29:37.814106    7460 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0421 20:29:37.814106    7460 command_runner.go:130] >  controllerManager:
	I0421 20:29:37.814106    7460 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.27.198.190
	+  advertiseAddress: 172.27.197.221
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-152500"
	   kubeletExtraArgs:
	-    node-ip: 172.27.198.190
	+    node-ip: 172.27.197.221
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.27.198.190"]
	+  certSANs: ["127.0.0.1", "localhost", "172.27.197.221"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
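	[editor's sketch] kubeadm.go:634 above treats a non-empty `diff -u` between the live kubeadm.yaml and the freshly generated kubeadm.yaml.new as config drift and reconfigures the cluster from the new file. A sketch of that kind of drift check (paths taken from the log; the exit-code handling here is an assumption, not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// driftDetected runs `diff -u old new`; diff exits 0 when the files match,
	// 1 when they differ, and >1 on error (illustrative sketch of the check above).
	func driftDetected(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // identical, no drift
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil // files differ -> drift
		}
		return false, "", err
	}

	func main() {
		drift, diff, err := driftDetected("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		if drift {
			fmt.Println("detected kubeadm config drift:\n" + diff)
		}
	}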
	I0421 20:29:37.814106    7460 kubeadm.go:1154] stopping kube-system containers ...
	I0421 20:29:37.827484    7460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 20:29:37.859712    7460 command_runner.go:130] > a6fab3c7e281
	I0421 20:29:37.859712    7460 command_runner.go:130] > bc85f90f7b18
	I0421 20:29:37.859712    7460 command_runner.go:130] > c9a9145e83af
	I0421 20:29:37.859712    7460 command_runner.go:130] > d6ef972126a9
	I0421 20:29:37.859712    7460 command_runner.go:130] > ad328e25a9d0
	I0421 20:29:37.859712    7460 command_runner.go:130] > 7f128889bd61
	I0421 20:29:37.859712    7460 command_runner.go:130] > 0e66350415f0
	I0421 20:29:37.859712    7460 command_runner.go:130] > a3675838aa7c
	I0421 20:29:37.859712    7460 command_runner.go:130] > 7ecc14e6d519
	I0421 20:29:37.859712    7460 command_runner.go:130] > eb483e47dc21
	I0421 20:29:37.859712    7460 command_runner.go:130] > 0bd5af3b1831
	I0421 20:29:37.859712    7460 command_runner.go:130] > 0690342790fe
	I0421 20:29:37.859712    7460 command_runner.go:130] > 5a55ab72d84e
	I0421 20:29:37.859712    7460 command_runner.go:130] > b0eb5fe00481
	I0421 20:29:37.859712    7460 command_runner.go:130] > 6dd47a357dc9
	I0421 20:29:37.859712    7460 command_runner.go:130] > e6ae7d993bb9
	I0421 20:29:37.862946    7460 docker.go:483] Stopping containers: [a6fab3c7e281 bc85f90f7b18 c9a9145e83af d6ef972126a9 ad328e25a9d0 7f128889bd61 0e66350415f0 a3675838aa7c 7ecc14e6d519 eb483e47dc21 0bd5af3b1831 0690342790fe 5a55ab72d84e b0eb5fe00481 6dd47a357dc9 e6ae7d993bb9]
	I0421 20:29:37.873667    7460 ssh_runner.go:195] Run: docker stop a6fab3c7e281 bc85f90f7b18 c9a9145e83af d6ef972126a9 ad328e25a9d0 7f128889bd61 0e66350415f0 a3675838aa7c 7ecc14e6d519 eb483e47dc21 0bd5af3b1831 0690342790fe 5a55ab72d84e b0eb5fe00481 6dd47a357dc9 e6ae7d993bb9
	I0421 20:29:37.901109    7460 command_runner.go:130] > a6fab3c7e281
	I0421 20:29:37.901109    7460 command_runner.go:130] > bc85f90f7b18
	I0421 20:29:37.901109    7460 command_runner.go:130] > c9a9145e83af
	I0421 20:29:37.901109    7460 command_runner.go:130] > d6ef972126a9
	I0421 20:29:37.901109    7460 command_runner.go:130] > ad328e25a9d0
	I0421 20:29:37.901109    7460 command_runner.go:130] > 7f128889bd61
	I0421 20:29:37.901109    7460 command_runner.go:130] > 0e66350415f0
	I0421 20:29:37.901109    7460 command_runner.go:130] > a3675838aa7c
	I0421 20:29:37.901109    7460 command_runner.go:130] > 7ecc14e6d519
	I0421 20:29:37.901109    7460 command_runner.go:130] > eb483e47dc21
	I0421 20:29:37.901109    7460 command_runner.go:130] > 0bd5af3b1831
	I0421 20:29:37.901109    7460 command_runner.go:130] > 0690342790fe
	I0421 20:29:37.901109    7460 command_runner.go:130] > 5a55ab72d84e
	I0421 20:29:37.901109    7460 command_runner.go:130] > b0eb5fe00481
	I0421 20:29:37.901109    7460 command_runner.go:130] > 6dd47a357dc9
	I0421 20:29:37.901109    7460 command_runner.go:130] > e6ae7d993bb9
	I0421 20:29:37.918633    7460 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 20:29:37.965924    7460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:29:37.986714    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0421 20:29:37.986916    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0421 20:29:37.986916    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0421 20:29:37.986916    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:29:37.987018    7460 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:29:37.987089    7460 kubeadm.go:156] found existing configuration files:
	
	I0421 20:29:38.000500    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:29:38.018614    7460 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:29:38.019067    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:29:38.033002    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:29:38.070050    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:29:38.090150    7460 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:29:38.090306    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:29:38.102851    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:29:38.136742    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:29:38.156682    7460 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:29:38.156682    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:29:38.168669    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:29:38.209236    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:29:38.231178    7460 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:29:38.231178    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:29:38.244170    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0421 20:29:38.277227    7460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:29:38.297987    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:38.582552    7460 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:29:38.582678    7460 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0421 20:29:38.582678    7460 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0421 20:29:38.582818    7460 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0421 20:29:38.582818    7460 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 20:29:38.582843    7460 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 20:29:38.582843    7460 command_runner.go:130] > [certs] Using the existing "sa" key
	I0421 20:29:38.582891    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:39.974290    7460 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:29:39.974405    7460 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:29:39.974405    7460 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:29:39.974470    7460 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:29:39.974470    7460 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:29:39.974547    7460 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:29:39.974547    7460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3915727s)
	I0421 20:29:39.974632    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:40.321904    7460 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:29:40.321978    7460 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:29:40.322043    7460 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0421 20:29:40.322043    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:40.437343    7460 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:29:40.437379    7460 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:29:40.437379    7460 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:29:40.437379    7460 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:29:40.437379    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:40.570386    7460 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:29:40.570478    7460 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:29:40.584429    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:41.087967    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:41.593558    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:42.088012    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:42.595185    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:42.627196    7460 command_runner.go:130] > 1865
	I0421 20:29:42.627196    7460 api_server.go:72] duration metric: took 2.0567023s to wait for apiserver process to appear ...
	I0421 20:29:42.627196    7460 api_server.go:88] waiting for apiserver healthz status ...
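	[editor's sketch] The healthz wait that follows keeps probing https://172.27.197.221:8443/healthz until it returns 200, treating the 403 (anonymous user) and 500 (post-start hooks still failing) responses below as "not ready yet". A minimal polling sketch; TLS verification is skipped here purely for brevity, whereas minikube authenticates against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns HTTP 200 or the timeout elapses.
	// Non-200 answers (403, 500, ...) just mean "not ready yet" (illustrative sketch).
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitHealthz("https://172.27.197.221:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}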
	I0421 20:29:42.627196    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:46.333263    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:29:46.333666    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:29:46.333736    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:46.388953    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:29:46.389557    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:29:46.628371    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:46.638351    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:29:46.638449    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:29:47.136338    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:47.143948    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:29:47.143948    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:29:47.641670    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:47.649675    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:29:47.649675    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:29:48.141885    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:48.148620    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 200:
	ok
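
The 403 → 500 → 200 progression above is typical of a control-plane restart: the anonymous probe gets 403 until the bootstrap RBAC roles that permit unauthenticated /healthz access are installed, then 500 while the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A minimal sketch of polling /healthz until it returns 200, with certificate verification skipped purely to keep the example self-contained (a real client would load the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://172.27.197.221:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
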
	I0421 20:29:48.149744    7460 round_trippers.go:463] GET https://172.27.197.221:8443/version
	I0421 20:29:48.149904    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:48.149904    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:48.149904    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:48.163084    7460 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 20:29:48.163084    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:48.163084    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Content-Length: 263
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:48 GMT
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Audit-Id: 848e06fe-0510-4529-a147-ba67c906e378
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:48.163084    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:48.163084    7460 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0421 20:29:48.163084    7460 api_server.go:141] control plane version: v1.30.0
	I0421 20:29:48.163084    7460 api_server.go:131] duration metric: took 5.5358475s to wait for apiserver health ...
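
With /healthz green (about 5.5 s after the wait began, per the duration metric above), the next request reads /version and records the control-plane version, v1.30.0. Decoding that response body into a struct is straightforward with encoding/json; the JSON below is trimmed from the body shown above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo mirrors a subset of the /version response fields shown in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        GoVersion  string `json:"goVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        body := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.0","goVersion":"go1.22.2","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
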
	I0421 20:29:48.163084    7460 cni.go:84] Creating CNI manager for ""
	I0421 20:29:48.163084    7460 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 20:29:48.166720    7460 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0421 20:29:48.181250    7460 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 20:29:48.191248    7460 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0421 20:29:48.191340    7460 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0421 20:29:48.191340    7460 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0421 20:29:48.191340    7460 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 20:29:48.191340    7460 command_runner.go:130] > Access: 2024-04-21 20:28:10.782547100 +0000
	I0421 20:29:48.191340    7460 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0421 20:29:48.191340    7460 command_runner.go:130] > Change: 2024-04-21 20:28:01.443000000 +0000
	I0421 20:29:48.191340    7460 command_runner.go:130] >  Birth: -
	I0421 20:29:48.191523    7460 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 20:29:48.191614    7460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0421 20:29:48.244292    7460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 20:29:49.150827    7460 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0421 20:29:49.151603    7460 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0421 20:29:49.151603    7460 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0421 20:29:49.151603    7460 command_runner.go:130] > daemonset.apps/kindnet configured
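
Because three nodes were detected, kindnet is selected as the CNI and its manifest is applied with the bundled kubectl against the node-local kubeconfig; the "unchanged"/"configured" lines mean the objects already matched the manifest. A sketch of that apply step, with paths taken from the log (run on the node, not the Windows host):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
        kubeconfig := "/var/lib/minikube/kubeconfig"
        manifest := "/var/tmp/minikube/cni.yaml"

        // Equivalent of the apply command in the log above.
        cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
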
	I0421 20:29:49.151647    7460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:29:49.151908    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:29:49.151950    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.151950    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.151950    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.159729    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:29:49.159729    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.159729    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.159729    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Audit-Id: 4ab92c78-a462-4ce6-8a25-8aa97036617a
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.162047    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1859"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 85368 chars]
	I0421 20:29:49.168613    7460 system_pods.go:59] 12 kube-system pods found
	I0421 20:29:49.169551    7460 system_pods.go:61] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "etcd-multinode-152500" [e5f399f5-b04e-4ac1-8646-d103d2d8f74a] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kindnet-kvd8z" [e6d4f203-892a-4a67-a6aa-38161a3749da] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kindnet-rkgsx" [ba1febf0-40e8-4a24-83e0-cbb9f6c01e34] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-apiserver-multinode-152500" [52744df0-77af-4caf-b69d-af2789c25eab] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-proxy-9zlm5" [61ba111b-28e9-40db-943d-22a595fdc27e] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-proxy-sp699" [8eab29a5-b24b-4d2c-a829-fbf2770ef34c] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:29:49.169551    7460 system_pods.go:74] duration metric: took 17.9034ms to wait for pod list to return data ...
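
The 12-pod inventory above is a plain pod list in the kube-system namespace. A minimal client-go sketch that produces the same kind of summary, assuming a kubeconfig at the default location rather than minikube's node-local one:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config; minikube itself talks to the node-local kubeconfig instead.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
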
	I0421 20:29:49.169551    7460 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:29:49.169551    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes
	I0421 20:29:49.169551    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.169551    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.169551    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.175274    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:49.175274    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.176205    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Audit-Id: 05f98e78-cd0c-4372-a45e-a3068abc31c7
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.176246    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.176475    7460 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1859"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0421 20:29:49.178208    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:29:49.178208    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:29:49.178286    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:29:49.178286    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:29:49.178286    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:29:49.178286    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:29:49.178337    7460 node_conditions.go:105] duration metric: took 8.7349ms to run NodePressure ...
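
The NodePressure pass reads each node's capacity from the node list: 2 CPUs and 17734596Ki of ephemeral storage per node in this run. With client-go the same figures come from Node.Status.Capacity; a self-contained sketch (again assuming the default kubeconfig):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
        }
    }
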
	I0421 20:29:49.178337    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:49.543082    7460 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0421 20:29:49.543140    7460 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0421 20:29:49.543140    7460 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 20:29:49.543382    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0421 20:29:49.543443    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.543443    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.543472    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.577045    7460 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0421 20:29:49.577045    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.577045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.577045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Audit-Id: 6e6b2985-a2c0-40a8-a94d-a8209578e4a2
	I0421 20:29:49.579061    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1865"},"items":[{"metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"e5f399f5-b04e-4ac1-8646-d103d2d8f74a","resourceVersion":"1863","creationTimestamp":"2024-04-21T20:05:53Z","deletionTimestamp":"2024-04-21T20:29:49Z","deletionGracePeriodSeconds":0,"labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.198.190:2379","kubernetes.io/config.hash":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.mirror":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.seen":"2024-04-21T20:05:53.333716613Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-2
1T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot [truncated 29325 chars]
	I0421 20:29:49.580806    7460 retry.go:31] will retry after 214.399221ms: kubelet not initialised
	I0421 20:29:49.796544    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0421 20:29:49.796588    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.796617    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.796617    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.808267    7460 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 20:29:49.808267    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.808267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.808267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Audit-Id: 05985484-8311-4706-803a-1a3e1f5d110d
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.810004    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1878"},"items":[{"metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1873","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0421 20:29:49.811805    7460 kubeadm.go:733] kubelet initialised
	I0421 20:29:49.811864    7460 kubeadm.go:734] duration metric: took 268.7225ms waiting for restarted kubelet to initialise ...
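
The "kubelet not initialised" retry above re-lists the tier=control-plane pods with a short backoff (about 214 ms here) until the restarted kubelet has recreated its mirror pods. A generic retry-with-backoff sketch of that idea, with the condition reduced to a caller-supplied function (illustrative, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil keeps calling check with a small randomized backoff until it
    // succeeds or the deadline passes.
    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            backoff := 100*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("will retry after %s: %v\n", backoff, err)
            time.Sleep(backoff)
        }
    }

    func main() {
        attempts := 0
        err := retryUntil(10*time.Second, func() error {
            attempts++
            if attempts < 2 {
                return errors.New("kubelet not initialised")
            }
            return nil
        })
        fmt.Println("result:", err, "after", attempts, "attempts")
    }
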
	I0421 20:29:49.811864    7460 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:29:49.811939    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:29:49.811939    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.811939    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.811939    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.821672    7460 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 20:29:49.821672    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.821672    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Audit-Id: f3c069e7-3a61-4a63-aee9-6efe4ab11baa
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.821672    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.823700    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1879"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 87967 chars]
	I0421 20:29:49.828114    7460 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.828418    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:49.828418    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.828418    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.828491    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.836244    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:29:49.836244    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.836244    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Audit-Id: f6e9de13-c425-44a9-9cc5-e76a736feacc
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.836244    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.836833    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0421 20:29:49.837861    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.837861    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.837861    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.837861    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.851397    7460 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 20:29:49.852274    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.852358    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.852358    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.852358    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.852358    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.852390    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.852390    7460 round_trippers.go:580]     Audit-Id: 4ae82317-ccdb-454e-86fa-153a7e8dea15
	I0421 20:29:49.852390    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.852965    7460 pod_ready.go:97] node "multinode-152500" hosting pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.852965    7460 pod_ready.go:81] duration metric: took 24.7918ms for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.852965    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
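
Each of these pod waits is short-circuited when the hosting node itself is not Ready: the pod is skipped rather than waited on, which is why every control-plane pod check in this pass completes within a few milliseconds. A sketch of that node-readiness gate using client-go (not minikube's pod_ready.go), with the node name taken from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has its Ready condition set to True.
    func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(clientset, "multinode-152500")
        fmt.Println("ready:", ready, "err:", err)
    }
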
	I0421 20:29:49.852965    7460 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.852965    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:29:49.852965    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.852965    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.852965    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.869721    7460 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0421 20:29:49.869721    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.869721    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.869721    7460 round_trippers.go:580]     Audit-Id: afa2c9ad-d009-43cc-b361-ae3d66d29801
	I0421 20:29:49.869721    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.870770    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.870805    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.870805    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.871438    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1873","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0421 20:29:49.872514    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.872543    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.872543    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.872543    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.880717    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:49.880717    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.880717    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.880717    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Audit-Id: 2fe21853-87e3-4030-a406-3338fd290166
	I0421 20:29:49.881400    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.881400    7460 pod_ready.go:97] node "multinode-152500" hosting pod "etcd-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.881400    7460 pod_ready.go:81] duration metric: took 28.4349ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.881400    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "etcd-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.881925    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.882114    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:29:49.882114    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.882114    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.882114    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.890421    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:49.890421    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.890421    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.890421    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.890835    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.890835    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.890835    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.890835    7460 round_trippers.go:580]     Audit-Id: 555ff6f9-2002-474e-b9b2-453b4347e81c
	I0421 20:29:49.891193    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"6e73294a-2a7d-4f05-beb1-bb011d5f1f52","resourceVersion":"1875","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.197.221:8443","kubernetes.io/config.hash":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.mirror":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.seen":"2024-04-21T20:29:40.518049422Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0421 20:29:49.891391    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.891391    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.891391    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.891391    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.895775    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:49.895775    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.895775    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.896162    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Audit-Id: 78a747f5-5492-45e9-a80e-5f7bb096d02c
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.896282    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.896282    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-apiserver-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.896282    7460 pod_ready.go:81] duration metric: took 14.3566ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.896282    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-apiserver-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.896282    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.896815    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:29:49.896872    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.896872    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.896872    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.900693    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:49.900693    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.900693    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.900693    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Audit-Id: 4994d00f-cda1-4eee-8fc8-6e1671fceb8f
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.901684    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1868","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0421 20:29:49.901684    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.902292    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.902292    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.902292    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.927526    7460 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0421 20:29:49.927948    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Audit-Id: 41c81ab6-a543-4abb-9f00-0976b5275192
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.927948    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.927948    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.930855    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.932052    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-controller-manager-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.932052    7460 pod_ready.go:81] duration metric: took 35.7693ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.932105    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-controller-manager-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.932144    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:50.000576    7460 request.go:629] Waited for 68.3953ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:29:50.000961    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:29:50.000961    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.000961    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.000961    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.005580    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.005580    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.005580    7460 round_trippers.go:580]     Audit-Id: 8f55617e-fe08-4428-88d8-1d8018df57ec
	I0421 20:29:50.005647    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.005647    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.005647    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.005647    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.005647    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.005770    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9zlm5","generateName":"kube-proxy-","namespace":"kube-system","uid":"61ba111b-28e9-40db-943d-22a595fdc27e","resourceVersion":"1803","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0421 20:29:50.209283    7460 request.go:629] Waited for 202.185ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:29:50.209283    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:29:50.209283    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.209283    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.209559    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.213642    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.213642    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.213642    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.213642    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.213642    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.213642    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.213642    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.213829    7460 round_trippers.go:580]     Audit-Id: 5aa4f8f0-c21e-43f8-8780-9ae607c967c9
	I0421 20:29:50.214016    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"1805","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4486 chars]
	I0421 20:29:50.214682    7460 pod_ready.go:97] node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
	I0421 20:29:50.214682    7460 pod_ready.go:81] duration metric: took 282.5354ms for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:50.214682    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
	I0421 20:29:50.214682    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:50.397995    7460 request.go:629] Waited for 183.0561ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:29:50.398323    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:29:50.398323    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.398323    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.398323    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.403101    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.403101    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Audit-Id: bb2c5247-456d-49a9-954e-e4b3bbfed67b
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.403101    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.403101    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.403101    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"1879","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6230 chars]
	I0421 20:29:50.602260    7460 request.go:629] Waited for 197.574ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:50.602260    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:50.602260    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.602260    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.602260    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.607225    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.607225    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.607225    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Audit-Id: de72a3e3-53d4-458c-a438-54cc501af205
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.607225    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.608097    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:50.609364    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-proxy-kl8t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:50.609542    7460 pod_ready.go:81] duration metric: took 394.8579ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:50.609542    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-proxy-kl8t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:50.609611    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:50.807980    7460 request.go:629] Waited for 198.2032ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:29:50.808289    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:29:50.808289    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.808289    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.808289    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.813046    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.813046    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.813116    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.813173    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.813173    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.813173    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.813173    7460 round_trippers.go:580]     Audit-Id: c4d12c3b-17c3-4ade-8228-e6e48183fc14
	I0421 20:29:50.813173    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.813326    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sp699","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab29a5-b24b-4d2c-a829-fbf2770ef34c","resourceVersion":"1781","creationTimestamp":"2024-04-21T20:13:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:13:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0421 20:29:50.997013    7460 request.go:629] Waited for 182.4308ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:29:50.997121    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:29:50.997121    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.997156    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.997156    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.000797    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:51.001741    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.001741    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.001741    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.001741    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.001805    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.001805    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.001805    7460 round_trippers.go:580]     Audit-Id: b260768d-3785-4708-879e-c65d46b77d0b
	I0421 20:29:51.001946    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m03","uid":"9c2fb882-be16-4c12-815f-4dd3e35c66ee","resourceVersion":"1789","creationTimestamp":"2024-04-21T20:25:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_25_05_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:25:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0421 20:29:51.002046    7460 pod_ready.go:97] node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:29:51.002046    7460 pod_ready.go:81] duration metric: took 392.4325ms for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:51.002046    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
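
The recurring "Waited for NNNms due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, not from API Priority and Fairness on the server; the repeated per-pod and per-node GETs simply outrun the default token bucket. A minimal sketch of the knobs involved is below; the QPS/Burst values are illustrative assumptions chosen only to show which fields drive these waits, not minikube's settings or a tuning recommendation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// rest.Config.QPS and Burst feed the client-side token bucket that emits the
	// "Waited for ... due to client-side throttling" messages when left at the low
	// defaults. The values below are illustrative, not a recommendation.
	cfg.QPS = 20
	cfg.Burst = 40

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d nodes\n", len(nodes.Items))
}
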
	I0421 20:29:51.002046    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:51.201214    7460 request.go:629] Waited for 198.9119ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:29:51.201436    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:29:51.201475    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:51.201475    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.201507    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:51.216020    7460 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0421 20:29:51.217074    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.217074    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Audit-Id: f5df4fbd-0c2a-4dc5-8595-58dc469dbde6
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.217074    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.218247    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"1871","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0421 20:29:51.405450    7460 request.go:629] Waited for 186.6206ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:51.405998    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:51.405998    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:51.406071    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:51.406071    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.411374    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:51.411374    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.411374    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Audit-Id: 8e442dc2-0ec4-47fa-970c-4dbb614a49a1
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.411374    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.412047    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:51.412530    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-scheduler-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:51.412611    7460 pod_ready.go:81] duration metric: took 410.5611ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:51.412611    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-scheduler-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:51.412611    7460 pod_ready.go:38] duration metric: took 1.6006595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
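
The request pattern above — GET a kube-system pod, then GET the node it runs on, and skip the pod's readiness wait when that node is not Ready — is what pod_ready.go is logging for each system-critical pod. A minimal, self-contained sketch of the same check with client-go follows; the pod name, kubeconfig handling, and output are illustrative assumptions, not minikube's implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node carries a Ready=True condition.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Fetch the pod, then the node it is scheduled on, mirroring the paired
	// GET /pods/<name> and GET /nodes/<node> requests in the log above.
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-9zlm5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		fmt.Printf("node %q hosting pod %q is not Ready; skipping pod readiness wait\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("node %q Ready; pod %q phase: %s\n", node.Name, pod.Name, pod.Status.Phase)
}
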
	I0421 20:29:51.412611    7460 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:29:51.437138    7460 command_runner.go:130] > -16
	I0421 20:29:51.437138    7460 ops.go:34] apiserver oom_adj: -16
	I0421 20:29:51.437138    7460 kubeadm.go:591] duration metric: took 13.7170937s to restartPrimaryControlPlane
	I0421 20:29:51.437387    7460 kubeadm.go:393] duration metric: took 13.7931163s to StartCluster
	I0421 20:29:51.437387    7460 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:51.437599    7460 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:29:51.440009    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:51.440925    7460 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 20:29:51.440925    7460 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:29:51.445169    7460 out.go:177] * Verifying Kubernetes components...
	I0421 20:29:51.452565    7460 out.go:177] * Enabled addons: 
	I0421 20:29:51.441755    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:29:51.457049    7460 addons.go:505] duration metric: took 16.0612ms for enable addons: enabled=[]
	I0421 20:29:51.475508    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:51.852219    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
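
The two ssh_runner lines above restart the kubelet on the guest by running systemctl over SSH. A rough sketch of running those commands with golang.org/x/crypto/ssh is below; the address, user, and password are placeholders, and minikube's actual ssh_runner is not reproduced here.

package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder credentials and address; a real cluster VM would use its own
	// SSH identity. InsecureIgnoreHostKey is acceptable only for a throwaway test VM.
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("changeme")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "172.27.197.221:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Same two commands the log shows ssh_runner executing.
	for _, cmd := range []string{"sudo systemctl daemon-reload", "sudo systemctl start kubelet"} {
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		if err != nil {
			panic(fmt.Sprintf("%s: %v\n%s", cmd, err, out))
		}
	}
	fmt.Println("kubelet restarted")
}
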
	I0421 20:29:51.887023    7460 node_ready.go:35] waiting up to 6m0s for node "multinode-152500" to be "Ready" ...
	I0421 20:29:51.887194    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:51.887194    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:51.887326    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:51.887326    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.894501    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:29:51.894501    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Audit-Id: f9619b6d-d7a6-48bd-bed5-0824614a8ff7
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.894554    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.894554    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.896417    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:52.388332    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:52.388332    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:52.388332    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:52.388332    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:52.392923    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:52.393092    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Audit-Id: 0fd1d6a0-7010-4caf-ae3c-5fef1b1708e0
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:52.393092    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:52.393092    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:52 GMT
	I0421 20:29:52.393402    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:52.887985    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:52.887985    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:52.888119    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:52.888119    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:52.892422    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:52.892877    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Audit-Id: 5d355065-e438-4fa1-bb10-9f228b65cf54
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:52.892877    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:52.892877    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:52 GMT
	I0421 20:29:52.893073    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:53.387940    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:53.388175    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:53.388175    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:53.388175    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:53.396655    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:53.397633    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:53.397659    7460 round_trippers.go:580]     Audit-Id: f1467efe-65a1-4a5a-b4ee-7230df6307dd
	I0421 20:29:53.397659    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:53.397659    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:53.397770    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:53.397770    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:53.397770    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:53 GMT
	I0421 20:29:53.397770    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:53.890988    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:53.891077    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:53.891077    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:53.891141    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:53.895066    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:53.895066    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:53.895066    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:53.895066    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:53.895066    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:53.895066    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:53.895066    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:53 GMT
	I0421 20:29:53.895685    7460 round_trippers.go:580]     Audit-Id: 0951338f-bd6c-4ad0-ac05-da9dfac3427a
	I0421 20:29:53.896042    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:53.896794    7460 node_ready.go:53] node "multinode-152500" has status "Ready":"False"
	I0421 20:29:54.389759    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:54.389759    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:54.389759    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:54.389862    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:54.396683    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:29:54.396683    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:54.396683    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:54.396683    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:54 GMT
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Audit-Id: 44f21199-f012-4683-8bc3-c6108c3dde16
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:54.397330    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:54.889672    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:54.889735    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:54.889735    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:54.889735    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:54.893578    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:54.893578    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:54.893578    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:54.893578    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:54 GMT
	I0421 20:29:54.894165    7460 round_trippers.go:580]     Audit-Id: bb10f650-acb5-472b-9bdb-5992b986ce08
	I0421 20:29:54.894165    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:54.894165    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:54.894165    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:54.894626    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:55.388441    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:55.388441    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.388441    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.388441    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.393429    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:55.393429    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.393429    7460 round_trippers.go:580]     Audit-Id: 49464435-f019-4c0c-964b-3c1649f07f43
	I0421 20:29:55.393429    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.393429    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.393429    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.393429    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.393943    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.394473    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:55.888045    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:55.888252    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.888252    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.888252    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.892095    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:55.892262    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.892262    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.892262    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Audit-Id: c8b05f98-b542-4b08-9dd9-8cd266749d28
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.892355    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:55.893000    7460 node_ready.go:49] node "multinode-152500" has status "Ready":"True"
	I0421 20:29:55.893000    7460 node_ready.go:38] duration metric: took 4.0058955s for node "multinode-152500" to be "Ready" ...
	I0421 20:29:55.893139    7460 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:29:55.893139    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:29:55.893139    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.893139    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.893139    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.898737    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:55.898737    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.898737    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.898737    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.899244    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.899244    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.899244    7460 round_trippers.go:580]     Audit-Id: 5fd6fa6b-8bda-44a9-9688-b2291dc1c8aa
	I0421 20:29:55.899244    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.901787    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1908"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87393 chars]
	I0421 20:29:55.908545    7460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:55.910083    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:55.910083    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.910083    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.910083    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.914987    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:55.914987    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.914987    7460 round_trippers.go:580]     Audit-Id: 5cd9a830-e0ad-40a8-8705-137815d2acff
	I0421 20:29:55.914987    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.914987    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.914987    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.915583    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.915583    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.915674    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:55.916585    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:55.916661    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.916661    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.916661    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.920075    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:55.920075    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.920453    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.920453    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Audit-Id: f0a2be9b-d555-4552-8b58-aec8f64fbb34
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.920590    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:56.419369    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:56.419369    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.419369    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.419610    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.424605    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:56.424647    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.424686    7460 round_trippers.go:580]     Audit-Id: 6d6511eb-4bb4-44b6-8f22-0e4f6e8e781f
	I0421 20:29:56.424708    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.424708    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.424708    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.424708    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.424708    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.426250    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:56.427107    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:56.427107    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.427107    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.427107    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.431776    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:56.431776    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.431776    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.432310    7460 round_trippers.go:580]     Audit-Id: 95a7cfcd-e0e8-4b63-ab9a-a0585da570e9
	I0421 20:29:56.432310    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.432310    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.432310    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.432310    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.432508    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:56.923231    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:56.923511    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.923511    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.923511    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.927760    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:56.928424    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.928424    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.928424    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Audit-Id: 69e230fc-2c81-49eb-9506-50c74745b11f
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.928701    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:56.929569    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:56.929675    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.929675    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.929675    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.932846    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:56.933155    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.933155    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Audit-Id: 9dabddad-4098-471b-8a65-3eb7c2628044
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.933155    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.933548    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:57.410668    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:57.410827    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.410827    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.410827    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.415741    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:57.415741    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Audit-Id: d65cb4a7-41ae-425b-a06e-30d40115acbe
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.415741    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.415741    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.416452    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:57.417103    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:57.417103    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.417103    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.417103    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.421305    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:57.421305    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Audit-Id: 47e8858b-d1b1-4bbd-b696-130bd36563cc
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.421305    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.421305    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.421305    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:57.911785    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:57.911864    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.911864    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.911898    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.915251    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:57.915251    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.915796    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Audit-Id: 9836b6cd-9949-40df-90e6-10af4eb294be
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.915937    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.916194    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:57.917526    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:57.917526    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.917605    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.917605    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.920025    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:57.920458    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Audit-Id: d120ae42-8f61-49ce-b761-b84228a399d9
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.920505    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.920505    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.920939    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:57.921559    7460 pod_ready.go:102] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:29:58.423700    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:58.423700    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.423700    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.423700    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.427322    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:58.427778    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.427778    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Audit-Id: e900778c-9cb9-46ed-aa34-0dc4ca9825d2
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.427843    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.428142    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:58.429046    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:58.429100    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.429100    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.429134    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.431428    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:58.431428    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.431428    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.432389    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.432389    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.432389    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.432389    7460 round_trippers.go:580]     Audit-Id: 4b5fda83-9160-4aad-8630-ccc47bb05c32
	I0421 20:29:58.432389    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.432468    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:58.911977    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:58.912010    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.912082    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.912082    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.920596    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:58.920632    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.920632    7460 round_trippers.go:580]     Audit-Id: 501ccf0f-6126-4c71-896a-b0fe826bf161
	I0421 20:29:58.920691    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.920691    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.920691    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.920691    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.920691    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.920691    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1925","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0421 20:29:58.921845    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:58.921877    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.921877    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.921949    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.926276    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:58.926276    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Audit-Id: f9c461f6-f77e-44d8-bd84-f33781ec9cc9
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.926276    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.926276    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.926276    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.414648    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:59.414774    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.414838    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.414838    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.419200    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:59.419959    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.419959    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.419959    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Audit-Id: 6f54c1e9-1acc-4409-a416-2afdf3a0c805
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.420285    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1925","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0421 20:29:59.420890    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.420890    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.421107    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.421107    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.426141    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:59.426141    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.426141    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.426141    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Audit-Id: 070bf4bd-7b68-4c3e-b3a2-2b6b3cd74eac
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.427553    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.919894    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:59.919969    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.919969    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.919969    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.923306    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:59.923914    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Audit-Id: 1e3183f8-dfdc-4a75-842e-294e4824144e
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.923914    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.923914    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.925297    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0421 20:29:59.926246    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.926246    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.926299    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.926299    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.940135    7460 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 20:29:59.940135    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.940135    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Audit-Id: d9ac6475-6caa-4df3-8db8-9878e650f378
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.940135    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.941144    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.942893    7460 pod_ready.go:92] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:29:59.942893    7460 pod_ready.go:81] duration metric: took 4.0329416s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
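
Everything from the first GET of coredns-7db6d8ff4d-v7pf8 down to the two pod_ready.go lines above is a single readiness wait: the pod (and, in a paired GET, its hosting node) is re-read roughly every 500 ms until the PodReady condition reports True, which here took about 4 s of the 6m0s budget. Below is a minimal client-go sketch of that loop, assuming a reachable default kubeconfig; the namespace, pod name, interval and timeout are copied from the log purely for illustration, and this is not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the default kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// ~500 ms between polls and a 6-minute budget, matching the spacing and
	// the "waiting up to 6m0s" budget reported by pod_ready.go above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-v7pf8", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// The loop in the log also re-reads the hosting Node on each pass
			// (the paired GET /nodes/... requests); omitted here for brevity.
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
}
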
	I0421 20:29:59.942893    7460 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.942893    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:29:59.942893    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.942893    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.942893    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.952679    7460 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 20:29:59.952679    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.952679    7460 round_trippers.go:580]     Audit-Id: fa81c032-1600-4fe7-a5b6-f7cf9bb44185
	I0421 20:29:59.952679    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.952679    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.953057    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.953057    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.953057    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.954048    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1914","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0421 20:29:59.954750    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.954791    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.954832    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.954832    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.961777    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:29:59.961777    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Audit-Id: 2259ae4b-8980-4078-ace3-0167a6cdbcf2
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.961777    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.961777    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.961777    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.962819    7460 pod_ready.go:92] pod "etcd-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:29:59.962819    7460 pod_ready.go:81] duration metric: took 19.9256ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.962819    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.962819    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:29:59.962819    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.962819    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.962819    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.966468    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:59.966468    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.966468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.966468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Audit-Id: b5250bfa-0053-4caa-9ce0-6d0f840536cb
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.966468    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"6e73294a-2a7d-4f05-beb1-bb011d5f1f52","resourceVersion":"1911","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.197.221:8443","kubernetes.io/config.hash":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.mirror":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.seen":"2024-04-21T20:29:40.518049422Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0421 20:29:59.966468    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.966468    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.966468    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.966468    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.970715    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:59.970715    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.971468    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.971468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.971468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.971468    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.971468    7460 round_trippers.go:580]     Audit-Id: 26ffaf05-56b6-4078-88ad-d45404ef5e71
	I0421 20:29:59.971520    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.971677    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.971677    7460 pod_ready.go:92] pod "kube-apiserver-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:29:59.971677    7460 pod_ready.go:81] duration metric: took 8.858ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.971677    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.971677    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:29:59.971677    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.971677    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.971677    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.974573    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:59.974573    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.974573    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.974573    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.974573    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.974573    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.974573    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.975169    7460 round_trippers.go:580]     Audit-Id: 399d1462-2ec5-414e-be67-78f6c4f915a3
	I0421 20:29:59.975206    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1868","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0421 20:29:59.976293    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.976293    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.976345    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.976345    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.979255    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:59.979364    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.979364    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.979364    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Audit-Id: 9a2a1722-a474-4b7f-a07d-b6eaf1292ea4
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.979610    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:00.486049    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:30:00.486122    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.486122    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.486186    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.496575    7460 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 20:30:00.497315    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.497315    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.497387    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.497387    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.497387    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.497387    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.497387    7460 round_trippers.go:580]     Audit-Id: 884c4f84-eff9-4a83-adae-794d67ac84db
	I0421 20:30:00.497935    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1946","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0421 20:30:00.498820    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:00.498881    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.498881    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.498881    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.501210    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:30:00.501210    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.501210    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.501210    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Audit-Id: 3cc5b87a-364d-4980-8787-dc0fd20a4c39
	I0421 20:30:00.502189    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:00.502189    7460 pod_ready.go:92] pod "kube-controller-manager-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:30:00.502189    7460 pod_ready.go:81] duration metric: took 530.508ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.502189    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.502189    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:30:00.502189    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.503019    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.503019    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.507417    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:00.507417    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.507417    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.507417    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.507417    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.508411    7460 round_trippers.go:580]     Audit-Id: 312e496c-a156-4867-b951-d30bf7195762
	I0421 20:30:00.508411    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.508411    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.508411    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9zlm5","generateName":"kube-proxy-","namespace":"kube-system","uid":"61ba111b-28e9-40db-943d-22a595fdc27e","resourceVersion":"1803","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0421 20:30:00.509309    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:30:00.509336    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.509336    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.509336    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.511786    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:30:00.511786    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.511786    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.511786    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Audit-Id: 4d775e2b-9978-40ab-a90d-c2c6d12d839d
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.511786    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"1934","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0421 20:30:00.511786    7460 pod_ready.go:97] node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
	I0421 20:30:00.513117    7460 pod_ready.go:81] duration metric: took 10.9283ms for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	E0421 20:30:00.513117    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
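
The skip above is the notable part of this wait: kube-proxy-9zlm5 runs on multinode-152500-m02, whose Ready condition still reports Unknown after the restart, so the wait gives up on that pod in about 11 ms instead of spending its 6m0s budget on it. A small sketch of the node-condition check behind that decision follows; nodeIsReady is a hypothetical helper name, not minikube's API, and the package name is arbitrary.

package readycheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady is a hypothetical helper, not minikube's API: it reports whether
// the named node's Ready condition is "True". A node answering "Unknown", as
// multinode-152500-m02 does above, fails this check, so pods scheduled there
// are skipped rather than waited on.
func nodeIsReady(ctx context.Context, client kubernetes.Interface, nodeName string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
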
	I0421 20:30:00.513117    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.533295    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:30:00.533295    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.533295    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.533295    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.536702    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:30:00.536702    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.536702    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.536702    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Audit-Id: fdf3694e-b427-4029-8cf2-323b3e567205
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.536963    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"1893","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0421 20:30:00.721414    7460 request.go:629] Waited for 183.5314ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:00.721414    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:00.721573    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.721573    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.721573    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.724956    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:30:00.725426    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.725426    7460 round_trippers.go:580]     Audit-Id: f07d4f1e-946c-45a5-bc05-e58a49ac5ef0
	I0421 20:30:00.725426    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.725500    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.725500    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.725534    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.725534    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.725534    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:00.726547    7460 pod_ready.go:92] pod "kube-proxy-kl8t2" in "kube-system" namespace has status "Ready":"True"
	I0421 20:30:00.726600    7460 pod_ready.go:81] duration metric: took 213.334ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
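[editor's note] Several requests in this section are delayed with "Waited for ... due to client-side throttling, not priority and fairness": the Kubernetes client rate-limits itself before a request ever leaves the process. A minimal sketch of that pattern with golang.org/x/time/rate; the 5 QPS / burst-10 numbers are illustrative assumptions, not the values this client actually used.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token-bucket limiter: refill 5 tokens per second, burst of 10.
	// (Assumed numbers; the real client's QPS/burst come from its config.)
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		// Wait blocks until a token is available; this is the source of the
		// "Waited for ... due to client-side throttling" lines in the log.
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if d := time.Since(start); d > time.Millisecond {
			fmt.Printf("request %d throttled for %v\n", i, d.Round(time.Millisecond))
		}
		// ... issue the GET here ...
	}
}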
	I0421 20:30:00.726633    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.925717    7460 request.go:629] Waited for 198.6835ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:30:00.925811    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:30:00.925811    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.925811    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.925811    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.931131    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:30:00.931297    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Audit-Id: 624aff71-0f8c-4452-a719-a06936c18a5d
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.931297    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.931297    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.931297    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sp699","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab29a5-b24b-4d2c-a829-fbf2770ef34c","resourceVersion":"1781","creationTimestamp":"2024-04-21T20:13:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:13:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0421 20:30:01.128974    7460 request.go:629] Waited for 196.4329ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:30:01.129162    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:30:01.129162    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.129162    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.129162    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.133935    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:01.133935    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Audit-Id: 85e40b6a-afcd-4b05-8a99-fd29caee1690
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.133935    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.133935    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.134481    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m03","uid":"9c2fb882-be16-4c12-815f-4dd3e35c66ee","resourceVersion":"1928","creationTimestamp":"2024-04-21T20:25:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_25_05_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:25:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4302 chars]
	I0421 20:30:01.134773    7460 pod_ready.go:97] node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:30:01.134773    7460 pod_ready.go:81] duration metric: took 408.1367ms for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	E0421 20:30:01.134773    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:30:01.134773    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:01.334620    7460 request.go:629] Waited for 199.472ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:30:01.334780    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:30:01.334780    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.334780    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.334780    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.339112    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:01.339182    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Audit-Id: 5691c04e-7bac-4f2b-b0e5-73d123053b7b
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.339182    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.339182    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.339335    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"1907","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0421 20:30:01.534635    7460 request.go:629] Waited for 194.4132ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:01.534635    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:01.534635    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.534635    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.534635    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.540045    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:30:01.540045    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.540045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.540045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Audit-Id: b20bd934-64fe-4d54-929b-2abcc6fda74a
	I0421 20:30:01.540045    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:01.540822    7460 pod_ready.go:92] pod "kube-scheduler-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:30:01.540822    7460 pod_ready.go:81] duration metric: took 406.0457ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:01.540923    7460 pod_ready.go:38] duration metric: took 5.6477431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:30:01.540923    7460 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:30:01.556255    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:30:01.587708    7460 command_runner.go:130] > 1865
	I0421 20:30:01.588273    7460 api_server.go:72] duration metric: took 10.1472738s to wait for apiserver process to appear ...
	I0421 20:30:01.588273    7460 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:30:01.588397    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:30:01.598545    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 200:
	ok
	I0421 20:30:01.598545    7460 round_trippers.go:463] GET https://172.27.197.221:8443/version
	I0421 20:30:01.598545    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.598545    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.598545    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.600521    7460 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 20:30:01.601512    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.601512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Content-Length: 263
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Audit-Id: 3a161c80-1cdc-4f35-a9e7-afe66581b79b
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.601512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.602531    7460 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0421 20:30:01.602531    7460 api_server.go:141] control plane version: v1.30.0
	I0421 20:30:01.602531    7460 api_server.go:131] duration metric: took 14.2587ms to wait for apiserver health ...
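[editor's note] The healthz and /version probes above are plain HTTPS GETs against the apiserver at the address in the log. A short sketch with net/http; it skips TLS verification purely to keep the example small, whereas the real client trusts the cluster CA and presents client certificates.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// WARNING: InsecureSkipVerify is for this illustration only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// 1. /healthz should answer 200 with body "ok", as logged above.
	resp, err := client.Get("https://172.27.197.221:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// 2. /version returns the JSON shown in the log (major/minor/gitVersion...).
	resp, err = client.Get("https://172.27.197.221:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.30.0 above
}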
	I0421 20:30:01.602531    7460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:30:01.721488    7460 request.go:629] Waited for 118.8332ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:01.721703    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:01.721703    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.721703    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.721703    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.732512    7460 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 20:30:01.732512    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.732512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.732512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Audit-Id: 358d8317-9060-4f4b-aa51-e99f3ff5e13c
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.733803    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0421 20:30:01.738400    7460 system_pods.go:59] 12 kube-system pods found
	I0421 20:30:01.738454    7460 system_pods.go:61] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "etcd-multinode-152500" [437e0c4d-b43f-48c8-9fee-93e3e8a81c6d] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kindnet-kvd8z" [e6d4f203-892a-4a67-a6aa-38161a3749da] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kindnet-rkgsx" [ba1febf0-40e8-4a24-83e0-cbb9f6c01e34] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kube-apiserver-multinode-152500" [6e73294a-2a7d-4f05-beb1-bb011d5f1f52] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:30:01.738536    7460 system_pods.go:61] "kube-proxy-9zlm5" [61ba111b-28e9-40db-943d-22a595fdc27e] Running
	I0421 20:30:01.738536    7460 system_pods.go:61] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:30:01.738536    7460 system_pods.go:61] "kube-proxy-sp699" [8eab29a5-b24b-4d2c-a829-fbf2770ef34c] Running
	I0421 20:30:01.738568    7460 system_pods.go:61] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:30:01.738568    7460 system_pods.go:61] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:30:01.738568    7460 system_pods.go:74] duration metric: took 136.0352ms to wait for pod list to return data ...
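[editor's note] The "12 kube-system pods found" summary comes from one GET of /api/v1/namespaces/kube-system/pods and a scan of each item's phase. A sketch of decoding that PodList into only the fields the check needs; the types are deliberately minimal rather than the real client-go structs, and the body is assumed to have been fetched already (e.g. as in the probe sketch above).

package main

import (
	"encoding/json"
	"fmt"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase string `json:"phase"` // "Running", "Pending", ...
		} `json:"status"`
	} `json:"items"`
}

// listPods prints each pod's name and phase, mirroring the system_pods lines above.
func listPods(body []byte) error {
	var pl podList
	if err := json.Unmarshal(body, &pl); err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pl.Items))
	for _, p := range pl.Items {
		fmt.Printf("%q %s\n", p.Metadata.Name, p.Status.Phase)
	}
	return nil
}

func main() {
	// Tiny stand-in body; the real list is the ~86 KB response logged above.
	body := []byte(`{"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8"},"status":{"phase":"Running"}}]}`)
	if err := listPods(body); err != nil {
		panic(err)
	}
}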
	I0421 20:30:01.738568    7460 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:30:01.923876    7460 request.go:629] Waited for 184.8928ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/default/serviceaccounts
	I0421 20:30:01.924040    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/default/serviceaccounts
	I0421 20:30:01.924040    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.924040    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.924040    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.928539    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:01.928539    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.928539    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.928539    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Content-Length: 262
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Audit-Id: 0447afe9-70ce-4eeb-8c9d-d0f1807ef1cd
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.928539    7460 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a4620806-bbb0-42e7-af50-a593b05fe653","resourceVersion":"352","creationTimestamp":"2024-04-21T20:06:07Z"}}]}
	I0421 20:30:01.928539    7460 default_sa.go:45] found service account: "default"
	I0421 20:30:01.929102    7460 default_sa.go:55] duration metric: took 189.97ms for default service account to be created ...
	I0421 20:30:01.929102    7460 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:30:02.127021    7460 request.go:629] Waited for 197.5426ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:02.127534    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:02.127534    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:02.127534    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:02.127534    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:02.134435    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:30:02.135102    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:02.135102    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:02.135102    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:02 GMT
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Audit-Id: 5f368fce-b0f0-4fa1-82af-643d547059f0
	I0421 20:30:02.137051    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0421 20:30:02.141500    7460 system_pods.go:86] 12 kube-system pods found
	I0421 20:30:02.141500    7460 system_pods.go:89] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "etcd-multinode-152500" [437e0c4d-b43f-48c8-9fee-93e3e8a81c6d] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kindnet-kvd8z" [e6d4f203-892a-4a67-a6aa-38161a3749da] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kindnet-rkgsx" [ba1febf0-40e8-4a24-83e0-cbb9f6c01e34] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-apiserver-multinode-152500" [6e73294a-2a7d-4f05-beb1-bb011d5f1f52] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-proxy-9zlm5" [61ba111b-28e9-40db-943d-22a595fdc27e] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-proxy-sp699" [8eab29a5-b24b-4d2c-a829-fbf2770ef34c] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:30:02.141500    7460 system_pods.go:126] duration metric: took 212.3968ms to wait for k8s-apps to be running ...
	I0421 20:30:02.141500    7460 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:30:02.153438    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:30:02.181415    7460 system_svc.go:56] duration metric: took 39.9139ms WaitForService to wait for kubelet
	I0421 20:30:02.181492    7460 kubeadm.go:576] duration metric: took 10.7404884s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:30:02.181556    7460 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:30:02.328512    7460 request.go:629] Waited for 146.7643ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes
	I0421 20:30:02.328512    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes
	I0421 20:30:02.328512    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:02.328512    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:02.328512    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:02.333200    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:02.333200    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:02.333200    7460 round_trippers.go:580]     Audit-Id: f25be68c-7c94-43f1-bcbc-fd0154528834
	I0421 20:30:02.333681    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:02.333681    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:02.333681    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:02.333681    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:02.333681    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:02 GMT
	I0421 20:30:02.334251    7460 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16162 chars]
	I0421 20:30:02.335208    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:30:02.335284    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:30:02.335284    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:30:02.335284    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:30:02.335284    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:30:02.335284    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:30:02.335284    7460 node_conditions.go:105] duration metric: took 153.6626ms to run NodePressure ...
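[editor's note] The NodePressure check reads each node's capacity, e.g. ephemeral storage "17734596Ki" and cpu "2". These are Kubernetes resource quantities with binary suffixes; below is a small hand-rolled sketch of converting the Ki/Mi/Gi forms to bytes (the real code uses apimachinery's resource.Quantity rather than this parser).

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// binaryBytes converts a quantity with a binary suffix (Ki/Mi/Gi) to bytes.
// Illustration only; Kubernetes itself parses these with resource.Quantity.
func binaryBytes(q string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(q, "Ki"):
		mult, q = 1<<10, strings.TrimSuffix(q, "Ki")
	case strings.HasSuffix(q, "Mi"):
		mult, q = 1<<20, strings.TrimSuffix(q, "Mi")
	case strings.HasSuffix(q, "Gi"):
		mult, q = 1<<30, strings.TrimSuffix(q, "Gi")
	}
	n, err := strconv.ParseInt(q, 10, 64)
	if err != nil {
		return 0, err
	}
	return n * mult, nil
}

func main() {
	b, err := binaryBytes("17734596Ki") // capacity value logged above
	if err != nil {
		panic(err)
	}
	fmt.Printf("ephemeral storage: %d bytes (~%.1f GiB)\n", b, float64(b)/(1<<30))
}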
	I0421 20:30:02.335376    7460 start.go:240] waiting for startup goroutines ...
	I0421 20:30:02.335376    7460 start.go:245] waiting for cluster config update ...
	I0421 20:30:02.335376    7460 start.go:254] writing updated cluster config ...
	I0421 20:30:02.339633    7460 out.go:177] 
	I0421 20:30:02.342737    7460 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:30:02.353499    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:30:02.353499    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:30:02.359896    7460 out.go:177] * Starting "multinode-152500-m02" worker node in "multinode-152500" cluster
	I0421 20:30:02.364793    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:30:02.364793    7460 cache.go:56] Caching tarball of preloaded images
	I0421 20:30:02.364793    7460 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:30:02.364793    7460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:30:02.365853    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:30:02.368123    7460 start.go:360] acquireMachinesLock for multinode-152500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:30:02.368123    7460 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-152500-m02"
	I0421 20:30:02.368567    7460 start.go:96] Skipping create...Using existing machine configuration
	I0421 20:30:02.368638    7460 fix.go:54] fixHost starting: m02
	I0421 20:30:02.368829    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:04.516359    7460 main.go:141] libmachine: [stdout =====>] : Off
	
	I0421 20:30:04.516359    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:04.517237    7460 fix.go:112] recreateIfNeeded on multinode-152500-m02: state=Stopped err=<nil>
	W0421 20:30:04.517237    7460 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 20:30:04.523574    7460 out.go:177] * Restarting existing hyperv VM for "multinode-152500-m02" ...
	I0421 20:30:04.527195    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-152500-m02
	I0421 20:30:07.645278    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:07.645473    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:07.645473    7460 main.go:141] libmachine: Waiting for host to start...
	I0421 20:30:07.645473    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:09.902187    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:09.902187    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:09.902187    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:12.509352    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:12.509352    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:13.525271    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:15.726921    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:15.727737    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:15.727844    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:18.347235    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:18.347423    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:19.361393    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:21.560375    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:21.561168    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:21.561411    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:24.150225    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:24.150545    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:25.159489    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:27.363744    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:27.364042    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:27.364042    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:29.991670    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:29.991816    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:31.005921    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:33.211158    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:33.211674    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:33.211674    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:35.870396    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:35.870396    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:35.872597    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:38.062427    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:38.062427    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:38.062427    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:40.677534    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:40.678188    7460 main.go:141] libmachine: [stderr =====>] : 
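[editor's note] The block above shows libmachine repeatedly invoking PowerShell until the VM reports Running and its first network adapter has an address. A sketch of the same poll loop using only os/exec; the VM name and the powershell.exe flags mirror the commands in the log, but the fixed one-second retry interval is an assumption (minikube uses its own backoff).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs a single PowerShell expression and returns its trimmed stdout.
func psOutput(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-152500-m02"
	for {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			panic(err)
		}
		if state == "Running" {
			ip, err := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err != nil {
				panic(err)
			}
			if ip != "" {
				fmt.Println("host is up at", ip) // e.g. 172.27.194.200 above
				return
			}
		}
		time.Sleep(time.Second) // assumed interval
	}
}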
	I0421 20:30:40.678351    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:30:40.681219    7460 machine.go:94] provisionDockerMachine start ...
	I0421 20:30:40.681219    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:42.896316    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:42.896316    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:42.896400    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:45.550779    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:45.550779    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:45.558754    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:30:45.559466    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:30:45.559514    7460 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 20:30:45.697123    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 20:30:45.697200    7460 buildroot.go:166] provisioning hostname "multinode-152500-m02"
	I0421 20:30:45.697257    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:47.903066    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:47.903658    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:47.903748    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:50.547784    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:50.547983    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:50.554605    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:30:50.555185    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:30:50.555185    7460 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-152500-m02 && echo "multinode-152500-m02" | sudo tee /etc/hostname
	I0421 20:30:50.732960    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-152500-m02
	
	I0421 20:30:50.733089    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:52.901289    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:52.901289    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:52.901989    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:55.564355    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:55.564355    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:55.569819    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:30:55.570445    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:30:55.570445    7460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-152500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-152500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-152500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:30:55.733655    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
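[editor's note] The hostname and /etc/hosts steps above are shell snippets executed on the guest over SSH with the machine's id_rsa key as user "docker" on port 22 (all taken from the log). A minimal sketch of that remote execution with golang.org/x/crypto/ssh; the grep check it runs is illustrative, not minikube's exact command.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa`
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "172.27.194.200:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same idea as the snippet in the log: verify /etc/hosts carries the node name.
	out, err := sess.CombinedOutput(`grep -q multinode-152500-m02 /etc/hosts && echo present || echo missing`)
	fmt.Printf("%s err=%v\n", out, err)
}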
	I0421 20:30:55.733655    7460 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 20:30:55.733655    7460 buildroot.go:174] setting up certificates
	I0421 20:30:55.733655    7460 provision.go:84] configureAuth start
	I0421 20:30:55.735113    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:57.907152    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:57.907691    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:57.907691    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:00.530844    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:00.530844    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:00.530844    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:02.677410    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:02.677410    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:02.678014    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:05.321657    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:05.322561    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:05.322561    7460 provision.go:143] copyHostCerts
	I0421 20:31:05.322779    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 20:31:05.323109    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 20:31:05.323203    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 20:31:05.323797    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 20:31:05.324971    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 20:31:05.325350    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 20:31:05.325350    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 20:31:05.325799    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 20:31:05.326885    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 20:31:05.327196    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 20:31:05.327196    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 20:31:05.327196    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 20:31:05.328591    7460 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-152500-m02 san=[127.0.0.1 172.27.194.200 localhost minikube multinode-152500-m02]
	I0421 20:31:05.495601    7460 provision.go:177] copyRemoteCerts
	I0421 20:31:05.509273    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:31:05.509350    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:07.657926    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:07.657926    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:07.658882    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:10.305774    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:10.305774    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:10.307229    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:10.415707    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9062924s)
	I0421 20:31:10.415707    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 20:31:10.415973    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:31:10.473145    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 20:31:10.474543    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0421 20:31:10.527399    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 20:31:10.527399    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 20:31:10.577892    7460 provision.go:87] duration metric: took 14.8441292s to configureAuth
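[editor's note] configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the node name. A hedged sketch with crypto/x509 producing a certificate carrying those SANs; it is self-signed for brevity, whereas minikube signs server.pem with its CA (ca.pem/ca-key.pem), which this shortened example does not reproduce.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-152500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" list in the log.
		DNSNames:    []string{"localhost", "minikube", "multinode-152500-m02"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.27.194.200")},
	}
	// Self-signed for brevity; the real server.pem is signed by the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}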
	I0421 20:31:10.577892    7460 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:31:10.578636    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:31:10.578636    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:12.730974    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:12.730974    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:12.730974    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:15.336331    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:15.336331    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:15.343450    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:15.344202    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:15.344202    7460 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 20:31:15.487072    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 20:31:15.487072    7460 buildroot.go:70] root file system type: tmpfs
	I0421 20:31:15.487072    7460 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 20:31:15.487072    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:17.637846    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:17.637846    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:17.637846    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:20.299560    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:20.299560    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:20.307370    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:20.307370    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:20.307370    7460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.197.221"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 20:31:20.482126    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.197.221
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 20:31:20.482323    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:22.593553    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:22.593553    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:22.594543    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:25.204431    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:25.205252    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:25.212923    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:25.212923    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:25.212923    7460 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 20:31:27.701078    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 20:31:27.701265    7460 machine.go:97] duration metric: took 47.0196483s to provisionDockerMachine
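The docker.service update above follows a write-new/diff/swap pattern, so the daemon is only reloaded and restarted when the unit content actually changed (hence the "No such file or directory" from diff on this freshly provisioned node, followed by the enable symlink). A simplified local sketch of the same idempotent-update idea (paths hypothetical; the real flow runs the equivalent shell over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit swaps in a new unit file and restarts the service only when the content differs.
func updateUnit(path string, want []byte) error {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return nil // unchanged: no daemon-reload or restart needed
	}
	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}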
	I0421 20:31:27.701265    7460 start.go:293] postStartSetup for "multinode-152500-m02" (driver="hyperv")
	I0421 20:31:27.701265    7460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:31:27.716770    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:31:27.716770    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:29.895834    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:29.895834    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:29.896927    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:32.588837    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:32.588837    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:32.589691    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:32.706299    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9889303s)
	I0421 20:31:32.720200    7460 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:31:32.727323    7460 command_runner.go:130] > NAME=Buildroot
	I0421 20:31:32.727323    7460 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 20:31:32.727323    7460 command_runner.go:130] > ID=buildroot
	I0421 20:31:32.727323    7460 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 20:31:32.727323    7460 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 20:31:32.727419    7460 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:31:32.727419    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 20:31:32.727419    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 20:31:32.728688    7460 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 20:31:32.728762    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 20:31:32.742013    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:31:32.764498    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 20:31:32.815903    7460 start.go:296] duration metric: took 5.1146009s for postStartSetup
	I0421 20:31:32.815903    7460 fix.go:56] duration metric: took 1m30.4466047s for fixHost
	I0421 20:31:32.815903    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:35.009500    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:35.009500    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:35.010305    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:37.657803    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:37.657803    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:37.667216    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:37.667815    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:37.668073    7460 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 20:31:37.800632    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713731497.800005993
	
	I0421 20:31:37.800632    7460 fix.go:216] guest clock: 1713731497.800005993
	I0421 20:31:37.800632    7460 fix.go:229] Guest: 2024-04-21 20:31:37.800005993 +0000 UTC Remote: 2024-04-21 20:31:32.8159035 +0000 UTC m=+242.161584501 (delta=4.984102493s)
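The clock fix-up above compares the guest clock (date +%s.%N over SSH) with the host-side timestamp and then resets the guest with sudo date -s @<seconds>. A small sketch of the delta calculation using the values from this run (the drift threshold below is an assumption, not something the log states):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1713731497.800005993" // what `date +%s.%N` returned above
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Date(2024, 4, 21, 20, 31, 32, 815903500, time.UTC) // host-side "Remote" timestamp from the log
	delta := guest.Sub(host)
	fmt.Println("delta:", delta) // ~4.98s, matching the log line above

	if delta > 2*time.Second || delta < -2*time.Second { // threshold chosen for illustration only
		fmt.Printf("would run over SSH: sudo date -s @%d\n", guest.Unix())
	}
}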
	I0421 20:31:37.800632    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:39.994754    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:39.994754    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:39.994754    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:42.682915    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:42.683122    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:42.688666    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:42.688870    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:42.688870    7460 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713731497
	I0421 20:31:42.843557    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 20:31:37 UTC 2024
	
	I0421 20:31:42.843658    7460 fix.go:236] clock set: Sun Apr 21 20:31:37 UTC 2024
	 (err=<nil>)
	I0421 20:31:42.843658    7460 start.go:83] releasing machines lock for "multinode-152500-m02", held for 1m40.4748006s
	I0421 20:31:42.843845    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:45.037154    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:45.037154    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:45.037946    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:47.685142    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:47.685142    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:47.688485    7460 out.go:177] * Found network options:
	I0421 20:31:47.693437    7460 out.go:177]   - NO_PROXY=172.27.197.221
	W0421 20:31:47.695723    7460 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 20:31:47.699239    7460 out.go:177]   - NO_PROXY=172.27.197.221
	W0421 20:31:47.701284    7460 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 20:31:47.703274    7460 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 20:31:47.706338    7460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:31:47.706338    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:47.718339    7460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 20:31:47.718339    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:49.935500    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:49.936473    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:49.936942    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:49.938861    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:49.938861    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:49.939040    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:52.648166    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:52.648166    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:52.648166    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:52.682670    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:52.682670    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:52.683163    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:52.807281    7460 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 20:31:52.808231    7460 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0421 20:31:52.808231    7460 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1018551s)
	I0421 20:31:52.808231    7460 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0898545s)
	W0421 20:31:52.808385    7460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:31:52.823383    7460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:31:52.858965    7460 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0421 20:31:52.859038    7460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
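Disabling the conflicting bridge/podman CNI configs is simply a rename to *.mk_disabled, which the find/-exec mv command above performs. A local sketch of the same idea (would have to run as root inside the guest; not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled
		}
		if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
			continue // leave unrelated configs alone
		}
		if err := os.Rename(p, p+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("disabled", p) // e.g. /etc/cni/net.d/87-podman-bridge.conflist, as above
	}
}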
	I0421 20:31:52.859038    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:31:52.859300    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:31:52.903577    7460 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0421 20:31:52.919617    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 20:31:52.959526    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 20:31:52.983458    7460 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 20:31:52.997462    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 20:31:53.033520    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:31:53.070130    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 20:31:53.109252    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:31:53.147383    7460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:31:53.185939    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 20:31:53.221064    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 20:31:53.256700    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
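The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver, the expected pause image, the runc v2 runtime, and /etc/cni/net.d. The SystemdCgroup edit, for instance, can be reproduced with a multiline regexp (a sketch only; assumes it runs as root inside the guest):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent to: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}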
	I0421 20:31:53.294069    7460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:31:53.315538    7460 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 20:31:53.331140    7460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:31:53.369621    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:53.622697    7460 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 20:31:53.661857    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:31:53.678922    7460 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 20:31:53.707747    7460 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0421 20:31:53.707747    7460 command_runner.go:130] > [Unit]
	I0421 20:31:53.707747    7460 command_runner.go:130] > Description=Docker Application Container Engine
	I0421 20:31:53.707747    7460 command_runner.go:130] > Documentation=https://docs.docker.com
	I0421 20:31:53.707747    7460 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0421 20:31:53.707747    7460 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0421 20:31:53.707747    7460 command_runner.go:130] > StartLimitBurst=3
	I0421 20:31:53.707747    7460 command_runner.go:130] > StartLimitIntervalSec=60
	I0421 20:31:53.707747    7460 command_runner.go:130] > [Service]
	I0421 20:31:53.707747    7460 command_runner.go:130] > Type=notify
	I0421 20:31:53.707747    7460 command_runner.go:130] > Restart=on-failure
	I0421 20:31:53.707747    7460 command_runner.go:130] > Environment=NO_PROXY=172.27.197.221
	I0421 20:31:53.707747    7460 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0421 20:31:53.707747    7460 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0421 20:31:53.707747    7460 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0421 20:31:53.707747    7460 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0421 20:31:53.707747    7460 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0421 20:31:53.707747    7460 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0421 20:31:53.707747    7460 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0421 20:31:53.707747    7460 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0421 20:31:53.707747    7460 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0421 20:31:53.707747    7460 command_runner.go:130] > ExecStart=
	I0421 20:31:53.707747    7460 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0421 20:31:53.708286    7460 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0421 20:31:53.708286    7460 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0421 20:31:53.708286    7460 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0421 20:31:53.708286    7460 command_runner.go:130] > LimitNOFILE=infinity
	I0421 20:31:53.708286    7460 command_runner.go:130] > LimitNPROC=infinity
	I0421 20:31:53.708286    7460 command_runner.go:130] > LimitCORE=infinity
	I0421 20:31:53.708286    7460 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0421 20:31:53.708384    7460 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0421 20:31:53.708384    7460 command_runner.go:130] > TasksMax=infinity
	I0421 20:31:53.708384    7460 command_runner.go:130] > TimeoutStartSec=0
	I0421 20:31:53.708429    7460 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0421 20:31:53.708429    7460 command_runner.go:130] > Delegate=yes
	I0421 20:31:53.708429    7460 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0421 20:31:53.708429    7460 command_runner.go:130] > KillMode=process
	I0421 20:31:53.708429    7460 command_runner.go:130] > [Install]
	I0421 20:31:53.708429    7460 command_runner.go:130] > WantedBy=multi-user.target
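The unit dumped by systemctl cat above is the override written earlier in this run; the key detail is the bare ExecStart= line, which clears the command inherited from any base configuration before the real dockerd command is set (otherwise systemd rejects a second ExecStart for a Type=notify service, as the comment in the unit itself explains). A minimal text/template sketch of rendering such an override (illustrative values taken from the log; not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

type dockerUnit struct {
	NoProxy   string
	ExecStart string
}

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={{.NoProxy}}"
# The empty ExecStart= clears any inherited command before setting a new one.
ExecStart=
ExecStart={{.ExecStart}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, dockerUnit{
		NoProxy:   "172.27.197.221",
		ExecStart: "/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock",
	})
}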
	I0421 20:31:53.722677    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:31:53.769069    7460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:31:53.823796    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:31:53.869309    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:31:53.910260    7460 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 20:31:53.983192    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:31:54.011767    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:31:54.056488    7460 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0421 20:31:54.070351    7460 ssh_runner.go:195] Run: which cri-dockerd
	I0421 20:31:54.078654    7460 command_runner.go:130] > /usr/bin/cri-dockerd
	I0421 20:31:54.092303    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 20:31:54.113929    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 20:31:54.167565    7460 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 20:31:54.405127    7460 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 20:31:54.618102    7460 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 20:31:54.618102    7460 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 20:31:54.672592    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:54.897857    7460 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 20:31:57.623997    7460 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7261196s)
	I0421 20:31:57.639941    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 20:31:57.684566    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:31:57.724044    7460 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 20:31:57.952205    7460 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 20:31:58.180378    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:58.411080    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 20:31:58.458273    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:31:58.502698    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:58.725930    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 20:31:58.858306    7460 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 20:31:58.873723    7460 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 20:31:58.883479    7460 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0421 20:31:58.883479    7460 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 20:31:58.883479    7460 command_runner.go:130] > Device: 0,22	Inode: 852         Links: 1
	I0421 20:31:58.883479    7460 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0421 20:31:58.883479    7460 command_runner.go:130] > Access: 2024-04-21 20:31:58.774879187 +0000
	I0421 20:31:58.883737    7460 command_runner.go:130] > Modify: 2024-04-21 20:31:58.774879187 +0000
	I0421 20:31:58.883737    7460 command_runner.go:130] > Change: 2024-04-21 20:31:58.779879430 +0000
	I0421 20:31:58.883737    7460 command_runner.go:130] >  Birth: -
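"Will wait 60s for socket path /var/run/cri-dockerd.sock" is a simple poll-until-the-socket-exists loop; the stat output above shows the socket appearing almost immediately after the cri-docker.service restart. A sketch of such a wait (the poll interval is an arbitrary choice):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is ready")
}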
	I0421 20:31:58.883919    7460 start.go:562] Will wait 60s for crictl version
	I0421 20:31:58.898193    7460 ssh_runner.go:195] Run: which crictl
	I0421 20:31:58.905744    7460 command_runner.go:130] > /usr/bin/crictl
	I0421 20:31:58.919379    7460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:31:58.987734    7460 command_runner.go:130] > Version:  0.1.0
	I0421 20:31:58.987797    7460 command_runner.go:130] > RuntimeName:  docker
	I0421 20:31:58.987797    7460 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0421 20:31:58.987797    7460 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 20:31:58.987866    7460 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0421 20:31:58.998748    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:31:59.037006    7460 command_runner.go:130] > 26.0.1
	I0421 20:31:59.050016    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:31:59.092743    7460 command_runner.go:130] > 26.0.1
	I0421 20:31:59.098811    7460 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 20:31:59.101257    7460 out.go:177]   - env NO_PROXY=172.27.197.221
	I0421 20:31:59.103793    7460 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 20:31:59.111444    7460 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 20:31:59.111444    7460 ip.go:210] interface addr: 172.27.192.1/20
	I0421 20:31:59.127165    7460 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 20:31:59.135640    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
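The /etc/hosts rewrite above strips any existing host.minikube.internal line and appends the host-side gateway address found a few lines earlier, so the mapping stays correct across restarts. A simplified local sketch of that idempotent rewrite (needs root; the real flow does it over SSH with grep/tee exactly as shown above):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "172.27.192.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("pinned", entry)
}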
	I0421 20:31:59.161393    7460 mustload.go:65] Loading cluster: multinode-152500
	I0421 20:31:59.162861    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:31:59.163365    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:01.356507    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:01.356507    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:01.357367    7460 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:32:01.357815    7460 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500 for IP: 172.27.194.200
	I0421 20:32:01.357815    7460 certs.go:194] generating shared ca certs ...
	I0421 20:32:01.357815    7460 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:32:01.358793    7460 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 20:32:01.359125    7460 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 20:32:01.359272    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 20:32:01.359557    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 20:32:01.359733    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 20:32:01.360031    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 20:32:01.360667    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 20:32:01.361071    7460 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 20:32:01.361071    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 20:32:01.361071    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 20:32:01.361754    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 20:32:01.362127    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 20:32:01.362687    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 20:32:01.363056    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:01.363260    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.363260    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.363260    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:32:01.416246    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:32:01.475087    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:32:01.530138    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:32:01.588986    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:32:01.641046    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 20:32:01.695951    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 20:32:01.764305    7460 ssh_runner.go:195] Run: openssl version
	I0421 20:32:01.773655    7460 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 20:32:01.788570    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 20:32:01.821651    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.830863    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.830929    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.844441    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.855992    7460 command_runner.go:130] > 51391683
	I0421 20:32:01.873022    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 20:32:01.909882    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 20:32:01.946597    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.954505    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.954943    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.967755    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.977761    7460 command_runner.go:130] > 3ec20f2e
	I0421 20:32:01.988853    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:32:02.033215    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:32:02.070354    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.078501    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.078596    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.094771    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.105594    7460 command_runner.go:130] > b5213941
	I0421 20:32:02.120128    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
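Each CA ends up both as a readable PEM under /usr/share/ca-certificates and as an OpenSSL subject-hash symlink (<hash>.0) in /etc/ssl/certs, which is the name most TLS stacks look up. A sketch of the hash-and-link step, shelling out to openssl for the hash just as the log does (would run as root inside the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in the run above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}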
	I0421 20:32:02.158779    7460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:32:02.165926    7460 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:32:02.166840    7460 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:32:02.167068    7460 kubeadm.go:928] updating node {m02 172.27.194.200 8443 v1.30.0 docker false true} ...
	I0421 20:32:02.167068    7460 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.194.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 20:32:02.182075    7460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:32:02.206399    7460 command_runner.go:130] > kubeadm
	I0421 20:32:02.206509    7460 command_runner.go:130] > kubectl
	I0421 20:32:02.206509    7460 command_runner.go:130] > kubelet
	I0421 20:32:02.206509    7460 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:32:02.220583    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0421 20:32:02.240887    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0421 20:32:02.274739    7460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:32:02.329059    7460 ssh_runner.go:195] Run: grep 172.27.197.221	control-plane.minikube.internal$ /etc/hosts
	I0421 20:32:02.337172    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.197.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:32:02.380653    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:32:02.604153    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:32:02.641945    7460 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:32:02.643166    7460 start.go:316] joinCluster: &{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:32:02.643367    7460 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:32:02.643433    7460 host.go:66] Checking if "multinode-152500-m02" exists ...
	I0421 20:32:02.644127    7460 mustload.go:65] Loading cluster: multinode-152500
	I0421 20:32:02.644656    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:02.645335    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:04.879846    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:04.879846    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:04.879846    7460 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:32:04.881501    7460 api_server.go:166] Checking apiserver status ...
	I0421 20:32:04.894749    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:32:04.894749    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:07.106073    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:07.106326    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:07.106628    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:32:09.773080    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:32:09.773080    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:09.773472    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:32:09.897593    7460 command_runner.go:130] > 1865
	I0421 20:32:09.897593    7460 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.0028071s)
	I0421 20:32:09.913085    7460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1865/cgroup
	W0421 20:32:09.934993    7460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1865/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:32:09.949205    7460 ssh_runner.go:195] Run: ls
	I0421 20:32:09.958032    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:32:09.965828    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 200:
	ok
	I0421 20:32:09.978838    7460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-152500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0421 20:32:10.170284    7460 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-rkgsx, kube-system/kube-proxy-9zlm5
	I0421 20:32:13.203109    7460 command_runner.go:130] > node/multinode-152500-m02 cordoned
	I0421 20:32:13.203173    7460 command_runner.go:130] > pod "busybox-fc5497c4f-82tdr" has DeletionTimestamp older than 1 seconds, skipping
	I0421 20:32:13.203203    7460 command_runner.go:130] > node/multinode-152500-m02 drained
	I0421 20:32:13.203203    7460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-152500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2243419s)
	I0421 20:32:13.203250    7460 node.go:128] successfully drained node "multinode-152500-m02"
	I0421 20:32:13.203333    7460 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0421 20:32:13.203333    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:32:15.366958    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:15.366958    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:15.366958    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:32:18.032592    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:32:18.033610    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:18.033690    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:32:18.517473    7460 command_runner.go:130] ! W0421 20:32:18.531283    1539 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0421 20:32:19.169586    7460 command_runner.go:130] ! W0421 20:32:19.182808    1539 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 474a506b27d5734448213d877b9514fbf7367bdb20aad63219c64d7241ce01ad: output: E0421 20:32:18.811371    1576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-82tdr_default\" network: cni config uninitialized" podSandboxID="474a506b27d5734448213d877b9514fbf7367bdb20aad63219c64d7241ce01ad"
	I0421 20:32:19.169586    7460 command_runner.go:130] ! time="2024-04-21T20:32:18Z" level=fatal msg="stopping the pod sandbox \"474a506b27d5734448213d877b9514fbf7367bdb20aad63219c64d7241ce01ad\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-82tdr_default\" network: cni config uninitialized"
	I0421 20:32:19.169586    7460 command_runner.go:130] ! : exit status 1
	I0421 20:32:19.205105    7460 command_runner.go:130] > [preflight] Running pre-flight checks
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Stopping the kubelet service
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0421 20:32:19.205393    7460 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0421 20:32:19.205393    7460 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0421 20:32:19.205521    7460 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0421 20:32:19.205521    7460 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0421 20:32:19.205521    7460 command_runner.go:130] > to reset your system's IPVS tables.
	I0421 20:32:19.205521    7460 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0421 20:32:19.205577    7460 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0421 20:32:19.205627    7460 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (6.0022504s)
	I0421 20:32:19.205681    7460 node.go:155] successfully reset node "multinode-152500-m02"
	I0421 20:32:19.207004    7460 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:32:19.207128    7460 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.197.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:32:19.208784    7460 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 20:32:19.209248    7460 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0421 20:32:19.209277    7460 round_trippers.go:463] DELETE https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:19.209277    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:19.209277    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:19.209277    7460 round_trippers.go:473]     Content-Type: application/json
	I0421 20:32:19.209277    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:19.228158    7460 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0421 20:32:19.228630    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:19.228630    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:19.228704    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:19.228704    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Content-Length: 171
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:19 GMT
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Audit-Id: e8319bf1-d416-49b4-a060-10ce0eedf4e6
	I0421 20:32:19.228819    7460 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-152500-m02","kind":"nodes","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799"}}
	I0421 20:32:19.228848    7460 node.go:180] successfully deleted node "multinode-152500-m02"
	I0421 20:32:19.228915    7460 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
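Deleting the stale Node object goes straight through the API server (the DELETE https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02 request above). With client-go the equivalent call is roughly the following (kubeconfig path taken from the log; an illustration, not the code minikube runs):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-152500-m02", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("deleted node multinode-152500-m02")
}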
	I0421 20:32:19.228995    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 20:32:19.228995    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:21.391409    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:21.391714    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:21.391838    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:32:24.001619    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:32:24.002326    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:24.002512    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:32:24.217262    7460 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3fngkc.qbfp2gcb61j0uepy --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 20:32:24.217262    7460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9882309s)
	I0421 20:32:24.217262    7460 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:32:24.217262    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3fngkc.qbfp2gcb61j0uepy --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-152500-m02"
	I0421 20:32:24.458913    7460 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:32:25.867157    7460 command_runner.go:130] > [preflight] Running pre-flight checks
	I0421 20:32:25.867157    7460 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0421 20:32:25.867157    7460 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.00168565s
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0421 20:32:25.867157    7460 command_runner.go:130] > This node has joined the cluster:
	I0421 20:32:25.867157    7460 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0421 20:32:25.867157    7460 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0421 20:32:25.867157    7460 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0421 20:32:25.867157    7460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3fngkc.qbfp2gcb61j0uepy --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-152500-m02": (1.6498833s)
	I0421 20:32:25.867157    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 20:32:26.112459    7460 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
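
The steps above mint a non-expiring bootstrap token on the control plane, run the printed kubeadm join command on the new worker, and then enable kubelet. A rough os/exec sketch of that same sequence, assuming it runs directly on the machines involved rather than through minikube's ssh_runner, and simplifying the systemctl calls to a single enable --now:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Control plane: print a join command with a non-expiring token,
        // mirroring "kubeadm token create --print-join-command --ttl=0" above.
        out, err := exec.Command("sudo", "kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        joinCmd := strings.TrimSpace(string(out))
        fmt.Println("join command:", joinCmd)

        // Worker: run the join command (split naively here for illustration),
        // then enable and start kubelet; the log also does a daemon-reload first.
        args := strings.Fields(joinCmd)
        if err := exec.Command("sudo", args...).Run(); err != nil {
            panic(err)
        }
        if err := exec.Command("sudo", "systemctl", "enable", "--now", "kubelet").Run(); err != nil {
            panic(err)
        }
    }
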
	I0421 20:32:26.331943    7460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-152500-m02 minikube.k8s.io/updated_at=2024_04_21T20_32_26_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=multinode-152500 minikube.k8s.io/primary=false
	I0421 20:32:26.466184    7460 command_runner.go:130] > node/multinode-152500-m02 labeled
	I0421 20:32:26.466390    7460 start.go:318] duration metric: took 23.8232102s to joinCluster
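
The labeling step just above shells out to kubectl label --overwrite on the VM; the same labels could instead be applied with a strategic-merge patch. A hedged client-go sketch (hypothetical helper, not the code path minikube uses here):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // labelWorker applies minikube.k8s.io labels like those set above,
    // but via a strategic-merge patch instead of kubectl.
    func labelWorker(ctx context.Context, cs kubernetes.Interface, node string) error {
        patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false"}}}`)
        _, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }
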
	I0421 20:32:26.466544    7460 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:32:26.469391    7460 out.go:177] * Verifying Kubernetes components...
	I0421 20:32:26.467274    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:26.487259    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:32:26.728099    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:32:26.769003    7460 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:32:26.769983    7460 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.197.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:32:26.770902    7460 node_ready.go:35] waiting up to 6m0s for node "multinode-152500-m02" to be "Ready" ...
	I0421 20:32:26.771091    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:26.771091    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:26.771091    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:26.771091    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:26.775859    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:26.775859    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:26.775859    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:26.775859    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:26.775859    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:26.775859    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:26.775859    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:26 GMT
	I0421 20:32:26.776153    7460 round_trippers.go:580]     Audit-Id: e8b44885-1606-43aa-b0a0-0cd6bb4e1f2b
	I0421 20:32:26.776461    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:27.285149    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:27.285365    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:27.285365    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:27.285365    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:27.289857    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:27.290002    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:27.290002    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:27.290002    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:27.290002    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:27.290114    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:27 GMT
	I0421 20:32:27.290114    7460 round_trippers.go:580]     Audit-Id: 40370ec9-5871-42c7-bd32-2a2700870a9d
	I0421 20:32:27.290114    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:27.290350    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:27.776551    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:27.776628    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:27.776683    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:27.776683    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:27.778939    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:27.778939    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:27.778939    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:27.778939    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:27 GMT
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Audit-Id: 8e3a1932-1292-4043-a2d8-6ed0b10ffd0c
	I0421 20:32:27.779888    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:28.284432    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:28.284432    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:28.284432    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:28.284432    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:28.289398    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:28.289398    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:28.289597    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:28 GMT
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Audit-Id: 2794ba7d-8409-4e04-8b01-81ca9f06a170
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:28.289597    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:28.289962    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:28.771902    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:28.771902    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:28.771902    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:28.771902    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:28.778200    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:32:28.778200    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:28 GMT
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Audit-Id: ca2f4308-73a3-4dee-ba64-15abe28e771c
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:28.778200    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:28.778200    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:28.778200    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:28.778853    7460 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:32:29.273070    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:29.273136    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:29.273136    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:29.273198    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:29.276994    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:29.277464    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:29.277464    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:29.277464    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:29 GMT
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Audit-Id: a46bac6a-33cd-4cc8-a2d9-0e64a2fd0733
	I0421 20:32:29.277657    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:29.786054    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:29.786054    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:29.786054    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:29.786054    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:29.790703    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:29.790703    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:29 GMT
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Audit-Id: 04536091-7d9e-40db-bf5f-df1e4b42660b
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:29.790703    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:29.790703    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:29.791069    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:30.272384    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:30.272384    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:30.272384    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:30.272384    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:30.276460    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:30.276460    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:30.276460    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:30.276460    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:30.277208    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:30.277208    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:30 GMT
	I0421 20:32:30.277208    7460 round_trippers.go:580]     Audit-Id: c267f12c-492a-48b1-a284-03cbfb187eee
	I0421 20:32:30.277208    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:30.277208    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:30.772074    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:30.772074    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:30.772074    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:30.772163    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:30.775506    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:30.776301    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Audit-Id: 17f4fa98-0eea-473b-9a10-55729f67331f
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:30.776301    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:30.776301    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:30 GMT
	I0421 20:32:30.776526    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:31.271342    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:31.271342    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:31.271598    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:31.271598    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:31.275856    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:31.276097    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:31.276097    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:31 GMT
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Audit-Id: 2143ae62-165f-450b-bef1-a95ac34ae7d1
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:31.276097    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:31.276503    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:31.276503    7460 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:32:31.772444    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:31.772444    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:31.772444    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:31.772444    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:31.776662    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:31.776662    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:31.776662    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:31.776662    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:31 GMT
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Audit-Id: d7c9cea8-3f8f-4360-b108-559328b3b916
	I0421 20:32:31.776851    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:32.285917    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:32.285917    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:32.285917    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:32.285917    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:32.289619    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:32.289619    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:32.289619    7460 round_trippers.go:580]     Audit-Id: 3c9b8cbf-ecb8-4a6e-b30d-e47661e9a3c3
	I0421 20:32:32.289619    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:32.289619    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:32.289986    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:32.289986    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:32.289986    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:32 GMT
	I0421 20:32:32.290222    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:32.783494    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:32.783571    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:32.783571    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:32.783571    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:32.787500    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:32.787500    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Audit-Id: ee561472-1707-4fb9-b59b-7a69dbd14e5a
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:32.787500    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:32.787500    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:32 GMT
	I0421 20:32:32.787500    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:33.282752    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:33.283000    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:33.283000    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:33.283134    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:33.286619    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:33.286619    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:33.286619    7460 round_trippers.go:580]     Audit-Id: 9cfe0dc8-65d9-4070-991a-7d2f07239775
	I0421 20:32:33.286897    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:33.286897    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:33.286897    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:33.286897    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:33.286897    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:33 GMT
	I0421 20:32:33.287174    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:33.287949    7460 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:32:33.783243    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:33.783243    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:33.783243    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:33.783243    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:33.786882    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:33.787593    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Audit-Id: 850b6f82-fe74-47ce-9b91-b23498514e2c
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:33.787593    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:33.787593    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:33 GMT
	I0421 20:32:33.787924    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:34.284867    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:34.285179    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.285179    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.285179    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.288534    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.288534    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.288534    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.288534    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.288534    7460 round_trippers.go:580]     Audit-Id: e99fd2cb-d96a-4993-8030-e9576f7eff4e
	I0421 20:32:34.289471    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.289471    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.289471    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.290234    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2111","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3933 chars]
	I0421 20:32:34.290839    7460 node_ready.go:49] node "multinode-152500-m02" has status "Ready":"True"
	I0421 20:32:34.290902    7460 node_ready.go:38] duration metric: took 7.5199459s for node "multinode-152500-m02" to be "Ready" ...
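
The node_ready loop above repeats GET /api/v1/nodes/multinode-152500-m02 roughly every half second until the node's Ready condition flips to True. A simplified client-go sketch of that wait, with the polling interval inferred from the request timestamps rather than taken from minikube source:

    package example

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> until the Ready condition
    // is True or the timeout elapses, mirroring the loop logged above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }
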
	I0421 20:32:34.290902    7460 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:32:34.291012    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:32:34.291012    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.291012    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.291012    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.299020    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:32:34.299020    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.299020    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.299020    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.299020    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.299020    7460 round_trippers.go:580]     Audit-Id: 5e460b6d-7590-45dc-94c5-85887051c028
	I0421 20:32:34.299214    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.299214    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.301119    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2113"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86160 chars]
	I0421 20:32:34.305790    7460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.305977    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:32:34.305977    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.306042    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.306042    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.308799    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:34.308799    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.308799    7460 round_trippers.go:580]     Audit-Id: f61bc0c3-7602-4237-a560-9392a6e1082b
	I0421 20:32:34.308799    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.309309    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.309309    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.309309    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.309309    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.309498    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0421 20:32:34.310166    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.310166    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.310222    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.310222    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.312996    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:34.312996    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.312996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Audit-Id: c9b60101-1b2f-4a5f-ac40-a9bedf14e455
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.312996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.313321    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.313740    7460 pod_ready.go:92] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.313740    7460 pod_ready.go:81] duration metric: took 7.8864ms for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
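
Each pod_ready check above comes down to inspecting the pod's PodReady condition after fetching it from the kube-system namespace. A minimal sketch of that check; treat it as illustrative only, since minikube's own helper may handle additional cases:

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether a pod's PodReady condition is True, the test
    // applied to each system-critical pod listed above.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
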
	I0421 20:32:34.313740    7460 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.313740    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:32:34.313740    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.313740    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.313740    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.316321    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:34.316321    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Audit-Id: f36c02a1-b7a9-4bd3-95fc-9dc8d1a377cb
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.316321    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.316321    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.317350    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1914","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0421 20:32:34.317839    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.317918    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.317918    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.317918    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.325289    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:32:34.325289    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.325289    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.325289    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Audit-Id: 47f345bf-830e-4337-bc1c-452e68ebb1f1
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.326144    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.326293    7460 pod_ready.go:92] pod "etcd-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.326293    7460 pod_ready.go:81] duration metric: took 12.5532ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.326293    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.326293    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:32:34.326293    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.326293    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.326293    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.332303    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:34.332354    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Audit-Id: d76086e3-694f-48d6-8f06-25b42d015948
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.332354    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.332354    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.332682    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"6e73294a-2a7d-4f05-beb1-bb011d5f1f52","resourceVersion":"1911","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.197.221:8443","kubernetes.io/config.hash":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.mirror":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.seen":"2024-04-21T20:29:40.518049422Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0421 20:32:34.333294    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.333294    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.333294    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.333294    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.336415    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.336502    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Audit-Id: 2cbe26b1-08ff-430c-85f8-f2d6b45e5842
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.336502    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.336502    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.336727    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.337005    7460 pod_ready.go:92] pod "kube-apiserver-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.337005    7460 pod_ready.go:81] duration metric: took 10.7121ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.337005    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.337212    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:32:34.337212    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.337212    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.337212    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.340519    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.340519    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Audit-Id: 32834897-dff2-42d4-a7f7-13c52330cbe8
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.340519    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.340519    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.341505    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1946","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0421 20:32:34.341665    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.341665    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.341665    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.341665    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.345332    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.345332    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Audit-Id: 4468f400-bb90-44d1-9d29-847c8e76d5d6
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.345332    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.345332    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.345759    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.346211    7460 pod_ready.go:92] pod "kube-controller-manager-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.346211    7460 pod_ready.go:81] duration metric: took 9.2063ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.346211    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.488531    7460 request.go:629] Waited for 142.0547ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:32:34.488591    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:32:34.488591    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.488591    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.488591    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.495446    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:32:34.495446    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Audit-Id: 30ccac8c-0846-4f07-af48-114710d5e4a8
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.495446    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.495446    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.495758    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9zlm5","generateName":"kube-proxy-","namespace":"kube-system","uid":"61ba111b-28e9-40db-943d-22a595fdc27e","resourceVersion":"2092","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5842 chars]
	I0421 20:32:34.693875    7460 request.go:629] Waited for 197.9063ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:34.693965    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:34.693965    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.693965    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.694037    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.698181    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.698181    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.698181    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.698181    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.698267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.698267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.698267    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.698267    7460 round_trippers.go:580]     Audit-Id: e5ad5945-4e3a-45cb-8f72-d56cc454ab6d
	I0421 20:32:34.698408    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2115","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3813 chars]
	I0421 20:32:34.698851    7460 pod_ready.go:92] pod "kube-proxy-9zlm5" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.698851    7460 pod_ready.go:81] duration metric: took 352.6367ms for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.698851    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.895895    7460 request.go:629] Waited for 196.7928ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:32:34.896061    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:32:34.896185    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.896185    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.896185    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.899596    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.899596    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.900472    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.900472    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.900507    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.900507    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.900507    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.900507    7460 round_trippers.go:580]     Audit-Id: 38d88449-f41a-40f7-8dac-f2ab810dccc9
	I0421 20:32:34.900701    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"1893","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0421 20:32:35.098086    7460 request.go:629] Waited for 196.5585ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.098322    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.098322    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.098322    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.098322    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.103925    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:35.103925    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.103996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.103996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Audit-Id: 62b42d63-a3f2-413a-9046-cfad777491fe
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.104185    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:35.104709    7460 pod_ready.go:92] pod "kube-proxy-kl8t2" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:35.104709    7460 pod_ready.go:81] duration metric: took 405.8553ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.104709    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.288093    7460 request.go:629] Waited for 183.2145ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:32:35.288093    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:32:35.288303    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.288303    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.288303    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.293234    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:35.293449    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.293449    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.293449    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Audit-Id: 6c3a9768-3440-4669-b3d8-20d7f36eae33
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.293898    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sp699","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab29a5-b24b-4d2c-a829-fbf2770ef34c","resourceVersion":"1781","creationTimestamp":"2024-04-21T20:13:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:13:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0421 20:32:35.489801    7460 request.go:629] Waited for 195.016ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:32:35.490036    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:32:35.490036    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.490036    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.490036    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.517787    7460 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0421 20:32:35.518132    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.518132    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Audit-Id: 7bcc2f31-f404-44e1-a83d-ddc1b5ed7a0e
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.518132    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.518636    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m03","uid":"9c2fb882-be16-4c12-815f-4dd3e35c66ee","resourceVersion":"1953","creationTimestamp":"2024-04-21T20:25:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_25_05_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:25:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0421 20:32:35.519134    7460 pod_ready.go:97] node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:32:35.519205    7460 pod_ready.go:81] duration metric: took 414.4934ms for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	E0421 20:32:35.519205    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:32:35.519205    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.693604    7460 request.go:629] Waited for 174.1432ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:32:35.693691    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:32:35.693691    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.693691    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.693691    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.697456    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:35.697456    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Audit-Id: 67ea80ea-d2be-4b34-8482-08d7825f3566
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.697456    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.697456    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.697842    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"1907","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0421 20:32:35.896861    7460 request.go:629] Waited for 198.1194ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.896861    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.896861    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.896861    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.896861    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.900475    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:35.900475    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.900475    7460 round_trippers.go:580]     Audit-Id: 8f2956c7-eb6a-43d9-bead-d5705ba6ccb3
	I0421 20:32:35.900999    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.900999    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.900999    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.900999    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.900999    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.904123    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:35.904709    7460 pod_ready.go:92] pod "kube-scheduler-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:35.904850    7460 pod_ready.go:81] duration metric: took 385.6414ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.904850    7460 pod_ready.go:38] duration metric: took 1.6138733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:32:35.904850    7460 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:32:35.919356    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:32:35.950184    7460 system_svc.go:56] duration metric: took 45.3343ms WaitForService to wait for kubelet
	I0421 20:32:35.950184    7460 kubeadm.go:576] duration metric: took 9.483526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:32:35.950184    7460 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:32:36.099311    7460 request.go:629] Waited for 148.8943ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes
	I0421 20:32:36.099311    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes
	I0421 20:32:36.099311    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:36.099311    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:36.099311    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:36.104103    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:36.104103    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Audit-Id: 58c3fa2b-603b-4471-add7-02d84c94417e
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:36.104103    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:36.104103    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:36 GMT
	I0421 20:32:36.105379    7460 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2118"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15489 chars]
	I0421 20:32:36.105910    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:32:36.105910    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:32:36.105910    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:32:36.105910    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:32:36.105910    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:32:36.105910    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:32:36.105910    7460 node_conditions.go:105] duration metric: took 155.7245ms to run NodePressure ...
	I0421 20:32:36.105910    7460 start.go:240] waiting for startup goroutines ...
	I0421 20:32:36.105910    7460 start.go:254] writing updated cluster config ...
	I0421 20:32:36.109963    7460 out.go:177] 
	I0421 20:32:36.114910    7460 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:36.122726    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:36.122726    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:32:36.129569    7460 out.go:177] * Starting "multinode-152500-m03" worker node in "multinode-152500" cluster
	I0421 20:32:36.131604    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:32:36.131604    7460 cache.go:56] Caching tarball of preloaded images
	I0421 20:32:36.132225    7460 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:32:36.132225    7460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:32:36.132225    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:32:36.140239    7460 start.go:360] acquireMachinesLock for multinode-152500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:32:36.140443    7460 start.go:364] duration metric: took 98.4µs to acquireMachinesLock for "multinode-152500-m03"
	I0421 20:32:36.140687    7460 start.go:96] Skipping create...Using existing machine configuration
	I0421 20:32:36.140767    7460 fix.go:54] fixHost starting: m03
	I0421 20:32:36.141586    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m03 ).state
	I0421 20:32:38.261989    7460 main.go:141] libmachine: [stdout =====>] : Off
	
	I0421 20:32:38.262248    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:38.262324    7460 fix.go:112] recreateIfNeeded on multinode-152500-m03: state=Stopped err=<nil>
	W0421 20:32:38.262324    7460 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 20:32:38.265870    7460 out.go:177] * Restarting existing hyperv VM for "multinode-152500-m03" ...
	I0421 20:32:38.268292    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-152500-m03

                                                
                                                
** /stderr **
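The tail of the stderr log above shows minikube's libmachine Hyper-V driver shelling out to PowerShell ("( Hyper-V\Get-VM multinode-152500-m03 ).state", then "Hyper-V\Start-VM") to detect that the m03 VM is Off and restart it. The following is only a minimal Go sketch of that shell-out pattern, using hypothetical helpers (vmState, startVM) rather than minikube's actual driver code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const powershell = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

// vmState mirrors the pattern visible in the log: run PowerShell
// non-interactively and read the VM state from stdout ("Off", "Running", ...).
// Hypothetical helper, not minikube's implementation.
func vmState(name string) (string, error) {
	out, err := exec.Command(powershell, "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)).Output()
	if err != nil {
		return "", fmt.Errorf("Get-VM %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// startVM corresponds to the "Restarting existing hyperv VM" step in the log.
func startVM(name string) error {
	return exec.Command(powershell, "-NoProfile", "-NonInteractive",
		"Hyper-V\\Start-VM "+name).Run()
}

func main() {
	state, err := vmState("multinode-152500-m03")
	if err != nil {
		panic(err)
	}
	if state == "Off" {
		if err := startVM("multinode-152500-m03"); err != nil {
			panic(err)
		}
	}
}

This only runs on a Windows host with the Hyper-V PowerShell module available, which is the environment the report describes.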
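The repeated "request.go:629] Waited for ... due to client-side throttling, not priority and fairness" lines earlier in the log come from client-go's client-side rate limiter, not from server-side API Priority and Fairness: once the client exceeds its QPS/Burst budget, each request is delayed before it is sent. A minimal sketch of where that budget lives, assuming an ordinary kubeconfig-based client with illustrative values (not minikube's settings):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; any valid kubeconfig works.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		`C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}

	// Client-side throttling budget: requests beyond Burst are delayed so the
	// long-run rate stays at QPS, which is what produces the
	// "Waited for ... due to client-side throttling" messages above.
	cfg.QPS = 5
	cfg.Burst = 10

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("kube-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}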
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-152500" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-152500
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-152500: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-152500" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-152500	172.27.198.190
multinode-152500-m02	172.27.195.108
multinode-152500-m03	172.27.193.99

                                                
                                                
After restart: 
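The empty "After restart" list above follows from the preceding start run exhausting the test's overall deadline: node list is attempted with 0s of budget remaining, fails immediately with context deadline exceeded, and the before/after comparison has nothing to compare. A minimal Go sketch of that pattern under a shared deadline, with hypothetical helper names rather than the test's actual helpers:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeList runs "minikube node list -p <profile>" under ctx and returns the
// "name<TAB>ip" lines, e.g. "multinode-152500\t172.27.198.190".
func nodeList(ctx context.Context, profile string) ([]string, error) {
	if err := ctx.Err(); err != nil {
		// Budget already spent: surfaces as "context deadline exceeded (0s)".
		return nil, err
	}
	out, err := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
		"node", "list", "-p", profile).Output()
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimSpace(string(out)), "\n"), nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	before, _ := nodeList(ctx, "multinode-152500")
	// ... restart the cluster here; a hung start consumes the remaining budget ...
	after, err := nodeList(ctx, "multinode-152500")
	if err != nil {
		fmt.Println("node list failed:", err)
	}
	if strings.Join(before, "\n") != strings.Join(after, "\n") {
		fmt.Printf("node list changed after restart:\nbefore:\n%s\nafter:\n%s\n",
			strings.Join(before, "\n"), strings.Join(after, "\n"))
	}
}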
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-152500 -n multinode-152500
E0421 20:32:54.257124   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-152500 -n multinode-152500: (12.8333251s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 logs -n 25: (9.2352657s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-152500 cp testdata\cp-test.txt                                                                                 | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:17 UTC | 21 Apr 24 20:17 UTC |
	|         | multinode-152500-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:17 UTC | 21 Apr 24 20:17 UTC |
	|         | multinode-152500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:17 UTC | 21 Apr 24 20:18 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:18 UTC | 21 Apr 24 20:18 UTC |
	|         | multinode-152500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:18 UTC | 21 Apr 24 20:18 UTC |
	|         | multinode-152500:/home/docker/cp-test_multinode-152500-m02_multinode-152500.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:18 UTC | 21 Apr 24 20:18 UTC |
	|         | multinode-152500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n multinode-152500 sudo cat                                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:18 UTC | 21 Apr 24 20:18 UTC |
	|         | /home/docker/cp-test_multinode-152500-m02_multinode-152500.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:18 UTC | 21 Apr 24 20:19 UTC |
	|         | multinode-152500-m03:/home/docker/cp-test_multinode-152500-m02_multinode-152500-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:19 UTC | 21 Apr 24 20:19 UTC |
	|         | multinode-152500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n multinode-152500-m03 sudo cat                                                                    | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:19 UTC | 21 Apr 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-152500-m02_multinode-152500-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-152500 cp testdata\cp-test.txt                                                                                 | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:19 UTC | 21 Apr 24 20:19 UTC |
	|         | multinode-152500-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:19 UTC | 21 Apr 24 20:19 UTC |
	|         | multinode-152500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:19 UTC | 21 Apr 24 20:19 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:20 UTC | 21 Apr 24 20:20 UTC |
	|         | multinode-152500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:20 UTC | 21 Apr 24 20:20 UTC |
	|         | multinode-152500:/home/docker/cp-test_multinode-152500-m03_multinode-152500.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:20 UTC | 21 Apr 24 20:20 UTC |
	|         | multinode-152500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n multinode-152500 sudo cat                                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:20 UTC | 21 Apr 24 20:20 UTC |
	|         | /home/docker/cp-test_multinode-152500-m03_multinode-152500.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt                                                        | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:20 UTC | 21 Apr 24 20:21 UTC |
	|         | multinode-152500-m02:/home/docker/cp-test_multinode-152500-m03_multinode-152500-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n                                                                                                  | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:21 UTC | 21 Apr 24 20:21 UTC |
	|         | multinode-152500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-152500 ssh -n multinode-152500-m02 sudo cat                                                                    | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:21 UTC | 21 Apr 24 20:21 UTC |
	|         | /home/docker/cp-test_multinode-152500-m03_multinode-152500-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-152500 node stop m03                                                                                           | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:21 UTC | 21 Apr 24 20:21 UTC |
	| node    | multinode-152500 node start                                                                                              | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:22 UTC | 21 Apr 24 20:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-152500                                                                                                 | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:25 UTC |                     |
	| stop    | -p multinode-152500                                                                                                      | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:25 UTC | 21 Apr 24 20:27 UTC |
	| start   | -p multinode-152500                                                                                                      | multinode-152500 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:27 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 20:27:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 20:27:30.836149    7460 out.go:291] Setting OutFile to fd 780 ...
	I0421 20:27:30.837153    7460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:27:30.837153    7460 out.go:304] Setting ErrFile to fd 748...
	I0421 20:27:30.837153    7460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:27:30.867766    7460 out.go:298] Setting JSON to false
	I0421 20:27:30.873064    7460 start.go:129] hostinfo: {"hostname":"minikube6","uptime":17126,"bootTime":1713714124,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 20:27:30.873064    7460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 20:27:30.999605    7460 out.go:177] * [multinode-152500] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 20:27:31.154617    7460 notify.go:220] Checking for updates...
	I0421 20:27:31.199233    7460 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:27:31.347392    7460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 20:27:31.444033    7460 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 20:27:31.609378    7460 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 20:27:31.738376    7460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 20:27:31.855711    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:27:31.855865    7460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 20:27:37.441004    7460 out.go:177] * Using the hyperv driver based on existing profile
	I0421 20:27:37.556213    7460 start.go:297] selected driver: hyperv
	I0421 20:27:37.556732    7460 start.go:901] validating driver "hyperv" against &{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:27:37.556959    7460 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 20:27:37.616262    7460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:27:37.616262    7460 cni.go:84] Creating CNI manager for ""
	I0421 20:27:37.616262    7460 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 20:27:37.616463    7460 start.go:340] cluster config:
	{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.198.190 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:27:37.616463    7460 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 20:27:37.850017    7460 out.go:177] * Starting "multinode-152500" primary control-plane node in "multinode-152500" cluster
	I0421 20:27:38.001415    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:27:38.002628    7460 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 20:27:38.002814    7460 cache.go:56] Caching tarball of preloaded images
	I0421 20:27:38.003218    7460 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:27:38.003559    7460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:27:38.003906    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:27:38.006976    7460 start.go:360] acquireMachinesLock for multinode-152500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:27:38.007175    7460 start.go:364] duration metric: took 120.6µs to acquireMachinesLock for "multinode-152500"
	I0421 20:27:38.007175    7460 start.go:96] Skipping create...Using existing machine configuration
	I0421 20:27:38.007175    7460 fix.go:54] fixHost starting: 
	I0421 20:27:38.007941    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:40.796629    7460 main.go:141] libmachine: [stdout =====>] : Off
	
	I0421 20:27:40.796629    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:40.796965    7460 fix.go:112] recreateIfNeeded on multinode-152500: state=Stopped err=<nil>
	W0421 20:27:40.797030    7460 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 20:27:40.802092    7460 out.go:177] * Restarting existing hyperv VM for "multinode-152500" ...
	I0421 20:27:40.804199    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-152500
	I0421 20:27:43.932256    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:27:43.932685    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:43.932685    7460 main.go:141] libmachine: Waiting for host to start...
	I0421 20:27:43.932685    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:46.202224    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:27:46.202404    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:46.202494    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:27:48.787474    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:27:48.787905    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:49.795361    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:52.017481    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:27:52.017727    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:52.017836    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:27:54.602569    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:27:54.602621    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:55.602995    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:27:57.824166    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:27:57.824695    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:27:57.824695    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:00.448610    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:28:00.448610    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:01.453914    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:03.637903    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:03.637903    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:03.637903    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:06.212801    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:28:06.213324    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:07.220091    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:09.447995    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:09.447995    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:09.448269    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:12.076402    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:12.076402    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:12.079747    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:14.207419    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:14.207495    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:14.207495    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:16.880216    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:16.880216    7460 main.go:141] libmachine: [stderr =====>] : 
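The restart sequence above polls Hyper-V through PowerShell until the VM reports a state of Running and its first network adapter exposes an IP address (172.27.197.221 here). A minimal sketch of that polling loop, assuming powershell.exe is on PATH and a VM with this name exists; the helper and timeout values are illustrative, not minikube's libmachine driver:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // psOutput runs a PowerShell expression the same way the log lines above show
    // and returns its trimmed stdout.
    func psOutput(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        vm := "multinode-152500"
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil {
                log.Fatal(err)
            }
            if state == "Running" {
                ip, _ := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
                if ip != "" {
                    fmt.Println("VM is up at", ip) // e.g. 172.27.197.221 in this run
                    return
                }
            }
            // The log shows the state/IP pair being re-queried roughly every second
            // after an empty result.
            time.Sleep(time.Second)
        }
        log.Fatal("timed out waiting for the VM to report an IP address")
    }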
	I0421 20:28:16.880996    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:28:16.883599    7460 machine.go:94] provisionDockerMachine start ...
	I0421 20:28:16.883599    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:19.048518    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:19.049464    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:19.049464    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:21.699792    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:21.700736    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:21.707110    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:21.707795    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:21.707795    7460 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 20:28:21.855619    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
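From here on, provisionDockerMachine drives every step over SSH to the VM's IP, one command per step (hostname, tee, systemctl, and so on), using the per-machine id_rsa key and the docker user. A minimal sketch of running a single command that way with the golang.org/x/crypto/ssh package; the key path and the InsecureIgnoreHostKey choice are illustrative assumptions for a throwaway test VM, not minikube's sshutil code:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("id_rsa") // placeholder path for the machine key
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
        }
        client, err := ssh.Dial("tcp", "172.27.197.221:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        fmt.Printf("hostname -> %q, err=%v\n", out, err) // the log above got "minikube" before renaming
    }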
	
	I0421 20:28:21.855720    7460 buildroot.go:166] provisioning hostname "multinode-152500"
	I0421 20:28:21.855720    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:24.037388    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:24.037388    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:24.038181    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:26.699846    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:26.700099    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:26.706170    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:26.706868    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:26.706868    7460 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-152500 && echo "multinode-152500" | sudo tee /etc/hostname
	I0421 20:28:26.886257    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-152500
	
	I0421 20:28:26.886257    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:29.049131    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:29.049572    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:29.049671    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:31.702638    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:31.702638    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:31.710165    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:31.710311    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:31.710311    7460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-152500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-152500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-152500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:28:31.871951    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:28:31.871951    7460 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 20:28:31.871951    7460 buildroot.go:174] setting up certificates
	I0421 20:28:31.871951    7460 provision.go:84] configureAuth start
	I0421 20:28:31.871951    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:34.037047    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:34.037047    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:34.037153    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:36.679209    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:36.679209    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:36.679209    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:38.876126    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:38.876213    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:38.876213    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:41.532261    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:41.532261    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:41.532819    7460 provision.go:143] copyHostCerts
	I0421 20:28:41.532819    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 20:28:41.533324    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 20:28:41.533324    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 20:28:41.533324    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 20:28:41.534976    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 20:28:41.535267    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 20:28:41.535338    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 20:28:41.535771    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 20:28:41.536691    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 20:28:41.536691    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 20:28:41.536691    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 20:28:41.537500    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 20:28:41.537674    7460 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-152500 san=[127.0.0.1 172.27.197.221 localhost minikube multinode-152500]
	I0421 20:28:41.840504    7460 provision.go:177] copyRemoteCerts
	I0421 20:28:41.854358    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:28:41.854455    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:44.023427    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:44.024272    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:44.024272    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:46.675964    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:46.675964    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:46.676724    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:28:46.789409    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9350162s)
	I0421 20:28:46.789409    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 20:28:46.789994    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:28:46.842465    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 20:28:46.843145    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0421 20:28:46.896777    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 20:28:46.897321    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0421 20:28:46.946192    7460 provision.go:87] duration metric: took 15.0741318s to configureAuth
	I0421 20:28:46.946192    7460 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:28:46.946894    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:28:46.947060    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:49.121279    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:49.121279    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:49.122089    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:51.804741    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:51.804741    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:51.813802    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:51.814665    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:51.814665    7460 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 20:28:51.959471    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 20:28:51.959592    7460 buildroot.go:70] root file system type: tmpfs
	I0421 20:28:51.959969    7460 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 20:28:51.960135    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:54.154331    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:54.155025    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:54.155171    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:28:56.805005    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:28:56.806024    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:56.814855    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:28:56.815303    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:28:56.815303    7460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 20:28:56.992829    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 20:28:56.992829    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:28:59.153715    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:28:59.153957    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:28:59.154070    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:01.800225    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:01.800469    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:01.810052    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:29:01.810206    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:29:01.810206    7460 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 20:29:04.479887    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 20:29:04.479887    7460 machine.go:97] duration metric: took 47.5959417s to provisionDockerMachine
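The docker.service unit written above follows the standard drop-in pattern: an empty ExecStart= first clears the command inherited from the base unit, then the fully-flagged dockerd line (TLS certificates, provider label, --insecure-registry for the service CIDR) replaces it. A minimal sketch of rendering such a unit with text/template; the struct fields and the trimmed-down unit body are illustrative, not minikube's exact template:

    package main

    import (
        "os"
        "text/template"
    )

    type dockerOpts struct {
        InsecureRegistry string
        Provider         string
    }

    // A trimmed-down version of the [Service] section shown in the log above.
    const unit = `[Service]
    # Clear the ExecStart inherited from the base unit, then set the real one;
    # otherwise systemd refuses to start ("more than one ExecStart= setting").
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock \
      --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem \
      --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} \
      --insecure-registry {{.InsecureRegistry}}
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        // In the run above, the service CIDR 10.96.0.0/12 is the insecure registry range.
        _ = t.Execute(os.Stdout, dockerOpts{InsecureRegistry: "10.96.0.0/12", Provider: "hyperv"})
    }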
	I0421 20:29:04.479887    7460 start.go:293] postStartSetup for "multinode-152500" (driver="hyperv")
	I0421 20:29:04.479887    7460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:29:04.495796    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:29:04.495796    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:06.654332    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:06.654528    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:06.654636    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:09.225470    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:09.226306    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:09.226368    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:29:09.344020    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8481883s)
	I0421 20:29:09.357755    7460 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:29:09.365191    7460 command_runner.go:130] > NAME=Buildroot
	I0421 20:29:09.365191    7460 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 20:29:09.365191    7460 command_runner.go:130] > ID=buildroot
	I0421 20:29:09.365191    7460 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 20:29:09.365191    7460 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 20:29:09.365191    7460 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:29:09.365191    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 20:29:09.365191    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 20:29:09.365191    7460 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 20:29:09.365191    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 20:29:09.380836    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:29:09.403827    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 20:29:09.457424    7460 start.go:296] duration metric: took 4.9775008s for postStartSetup
	I0421 20:29:09.457692    7460 fix.go:56] duration metric: took 1m31.449852s for fixHost
	I0421 20:29:09.457812    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:11.577440    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:11.577492    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:11.577492    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:14.183266    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:14.183266    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:14.190440    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:29:14.191109    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:29:14.191109    7460 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0421 20:29:14.332423    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713731354.337368033
	
	I0421 20:29:14.332423    7460 fix.go:216] guest clock: 1713731354.337368033
	I0421 20:29:14.332423    7460 fix.go:229] Guest: 2024-04-21 20:29:14.337368033 +0000 UTC Remote: 2024-04-21 20:29:09.457777 +0000 UTC m=+98.804506201 (delta=4.879591033s)
	I0421 20:29:14.332596    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:16.478711    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:16.478711    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:16.478858    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:19.058323    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:19.058379    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:19.066574    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:29:19.067058    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.197.221 22 <nil> <nil>}
	I0421 20:29:19.067154    7460 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713731354
	I0421 20:29:19.231299    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 20:29:14 UTC 2024
	
	I0421 20:29:19.231299    7460 fix.go:236] clock set: Sun Apr 21 20:29:14 UTC 2024
	 (err=<nil>)
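The clock fix above compares the guest's `date +%s.%N` output against the host-side reference recorded a few seconds earlier and, because the skew exceeds the tolerance, resets the guest clock with `sudo date -s @<unix-seconds>`. A minimal sketch of that check using the exact values from the fix.go lines above; the 2s tolerance is an assumption, not minikube's configured threshold:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest and host-side reference timestamps from the log above.
        guest := time.Unix(1713731354, 337368033)
        ref := time.Date(2024, 4, 21, 20, 29, 9, 457777000, time.UTC)

        skew := guest.Sub(ref)
        fmt.Println("measured skew:", skew) // 4.879591033s, matching the logged delta

        // If the skew is larger than the tolerance, reset the guest clock over SSH
        // with the same command form the log shows.
        if skew > 2*time.Second || skew < -2*time.Second {
            fmt.Printf("sudo date -s @%d\n", guest.Unix()) // sudo date -s @1713731354
        }
    }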
	I0421 20:29:19.231299    7460 start.go:83] releasing machines lock for "multinode-152500", held for 1m41.2233877s
	I0421 20:29:19.231648    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:21.382240    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:21.382240    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:21.382240    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:23.997140    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:23.997516    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:24.001950    7460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:29:24.002027    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:24.014425    7460 ssh_runner.go:195] Run: cat /version.json
	I0421 20:29:24.014425    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:29:26.210810    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:26.210810    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:26.211475    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:26.222037    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:29:26.222037    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:26.222037    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:29:28.963571    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:28.963571    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:28.964199    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:29:28.996407    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:29:28.997334    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:29:28.997665    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:29:29.062039    7460 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0421 20:29:29.062242    7460 ssh_runner.go:235] Completed: cat /version.json: (5.0477797s)
	I0421 20:29:29.075567    7460 ssh_runner.go:195] Run: systemctl --version
	I0421 20:29:29.180742    7460 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 20:29:29.180855    7460 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1788659s)
	I0421 20:29:29.180931    7460 command_runner.go:130] > systemd 252 (252)
	I0421 20:29:29.180989    7460 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0421 20:29:29.194263    7460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 20:29:29.203039    7460 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0421 20:29:29.203788    7460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:29:29.217411    7460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:29:29.248486    7460 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0421 20:29:29.249448    7460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 20:29:29.249533    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:29:29.249838    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:29:29.286676    7460 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0421 20:29:29.304163    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 20:29:29.343352    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 20:29:29.364847    7460 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 20:29:29.381897    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 20:29:29.418042    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:29:29.459121    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 20:29:29.494626    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:29:29.530585    7460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:29:29.568679    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 20:29:29.603356    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 20:29:29.641053    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 20:29:29.679587    7460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:29:29.702062    7460 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 20:29:29.715363    7460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:29:29.754036    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:29.989172    7460 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 20:29:30.028182    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:29:30.042286    7460 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 20:29:30.068584    7460 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0421 20:29:30.069211    7460 command_runner.go:130] > [Unit]
	I0421 20:29:30.069211    7460 command_runner.go:130] > Description=Docker Application Container Engine
	I0421 20:29:30.069211    7460 command_runner.go:130] > Documentation=https://docs.docker.com
	I0421 20:29:30.069211    7460 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0421 20:29:30.069211    7460 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0421 20:29:30.069211    7460 command_runner.go:130] > StartLimitBurst=3
	I0421 20:29:30.069211    7460 command_runner.go:130] > StartLimitIntervalSec=60
	I0421 20:29:30.069339    7460 command_runner.go:130] > [Service]
	I0421 20:29:30.069339    7460 command_runner.go:130] > Type=notify
	I0421 20:29:30.069339    7460 command_runner.go:130] > Restart=on-failure
	I0421 20:29:30.069339    7460 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0421 20:29:30.069339    7460 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0421 20:29:30.069339    7460 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0421 20:29:30.069339    7460 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0421 20:29:30.069339    7460 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0421 20:29:30.069339    7460 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0421 20:29:30.069339    7460 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0421 20:29:30.069339    7460 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0421 20:29:30.069339    7460 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0421 20:29:30.069533    7460 command_runner.go:130] > ExecStart=
	I0421 20:29:30.069579    7460 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0421 20:29:30.069606    7460 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0421 20:29:30.069606    7460 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0421 20:29:30.069606    7460 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0421 20:29:30.069606    7460 command_runner.go:130] > LimitNOFILE=infinity
	I0421 20:29:30.069606    7460 command_runner.go:130] > LimitNPROC=infinity
	I0421 20:29:30.069679    7460 command_runner.go:130] > LimitCORE=infinity
	I0421 20:29:30.069710    7460 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0421 20:29:30.069710    7460 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0421 20:29:30.069710    7460 command_runner.go:130] > TasksMax=infinity
	I0421 20:29:30.069710    7460 command_runner.go:130] > TimeoutStartSec=0
	I0421 20:29:30.069710    7460 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0421 20:29:30.069710    7460 command_runner.go:130] > Delegate=yes
	I0421 20:29:30.069801    7460 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0421 20:29:30.069832    7460 command_runner.go:130] > KillMode=process
	I0421 20:29:30.069832    7460 command_runner.go:130] > [Install]
	I0421 20:29:30.069886    7460 command_runner.go:130] > WantedBy=multi-user.target
	I0421 20:29:30.086342    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:29:30.126233    7460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:29:30.194615    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:29:30.236538    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:29:30.278341    7460 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 20:29:30.351369    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:29:30.379191    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:29:30.419070    7460 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0421 20:29:30.432510    7460 ssh_runner.go:195] Run: which cri-dockerd
	I0421 20:29:30.440042    7460 command_runner.go:130] > /usr/bin/cri-dockerd
	I0421 20:29:30.453981    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 20:29:30.475542    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 20:29:30.528275    7460 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 20:29:30.771084    7460 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 20:29:31.010761    7460 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 20:29:31.010761    7460 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 20:29:31.071550    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:31.322951    7460 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 20:29:34.031390    7460 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.708419s)
	I0421 20:29:34.049271    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 20:29:34.090397    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:29:34.131042    7460 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 20:29:34.378216    7460 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 20:29:34.612845    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:34.852624    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 20:29:34.897992    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:29:34.940138    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:35.167304    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 20:29:35.297563    7460 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 20:29:35.310546    7460 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 20:29:35.325248    7460 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0421 20:29:35.325248    7460 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 20:29:35.325248    7460 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0421 20:29:35.325526    7460 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0421 20:29:35.325577    7460 command_runner.go:130] > Access: 2024-04-21 20:29:35.205310822 +0000
	I0421 20:29:35.325577    7460 command_runner.go:130] > Modify: 2024-04-21 20:29:35.205310822 +0000
	I0421 20:29:35.325610    7460 command_runner.go:130] > Change: 2024-04-21 20:29:35.210310842 +0000
	I0421 20:29:35.325610    7460 command_runner.go:130] >  Birth: -
	I0421 20:29:35.325723    7460 start.go:562] Will wait 60s for crictl version
	I0421 20:29:35.340079    7460 ssh_runner.go:195] Run: which crictl
	I0421 20:29:35.346375    7460 command_runner.go:130] > /usr/bin/crictl
	I0421 20:29:35.359579    7460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:29:35.424259    7460 command_runner.go:130] > Version:  0.1.0
	I0421 20:29:35.425335    7460 command_runner.go:130] > RuntimeName:  docker
	I0421 20:29:35.425335    7460 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0421 20:29:35.425335    7460 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 20:29:35.425387    7460 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
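The crictl probe above can be reproduced by hand against the endpoint written to /etc/crictl.yaml earlier in this log:
	# Query the CRI runtime through cri-dockerd's socket; the output fields
	# correspond to the Version/RuntimeName/RuntimeVersion/RuntimeApiVersion
	# lines logged above.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version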
	I0421 20:29:35.436886    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:29:35.470737    7460 command_runner.go:130] > 26.0.1
	I0421 20:29:35.484332    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:29:35.518238    7460 command_runner.go:130] > 26.0.1
	I0421 20:29:35.522080    7460 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 20:29:35.522080    7460 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 20:29:35.527431    7460 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 20:29:35.532045    7460 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 20:29:35.532092    7460 ip.go:210] interface addr: 172.27.192.1/20
	I0421 20:29:35.547086    7460 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 20:29:35.554368    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
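The bash one-liner above strips any previous host.minikube.internal entry and appends the gateway address found for the Default Switch interface; the net effect inside the guest is a single hosts entry, which can be checked with grep:
	# After the rewrite, /etc/hosts contains (tab-separated):
	#   172.27.192.1	host.minikube.internal
	grep 'host.minikube.internal$' /etc/hosts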
	I0421 20:29:35.579396    7460 kubeadm.go:877] updating cluster {Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-1
52500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:
false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0421 20:29:35.579696    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:29:35.590556    7460 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 20:29:35.617594    7460 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0421 20:29:35.617594    7460 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0421 20:29:35.618318    7460 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0421 20:29:35.618318    7460 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0421 20:29:35.618318    7460 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:29:35.618318    7460 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0421 20:29:35.618608    7460 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0421 20:29:35.618608    7460 docker.go:615] Images already preloaded, skipping extraction
	I0421 20:29:35.630860    7460 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0421 20:29:35.657169    7460 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0421 20:29:35.657169    7460 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0421 20:29:35.657169    7460 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0421 20:29:35.657169    7460 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0421 20:29:35.657169    7460 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0421 20:29:35.657169    7460 cache_images.go:84] Images are preloaded, skipping loading
	I0421 20:29:35.657169    7460 kubeadm.go:928] updating node { 172.27.197.221 8443 v1.30.0 docker true true} ...
	I0421 20:29:35.657169    7460 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.197.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0421 20:29:35.667997    7460 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0421 20:29:35.706956    7460 command_runner.go:130] > cgroupfs
	I0421 20:29:35.707001    7460 cni.go:84] Creating CNI manager for ""
	I0421 20:29:35.707001    7460 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 20:29:35.707001    7460 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0421 20:29:35.707001    7460 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.197.221 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-152500 NodeName:multinode-152500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.197.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.197.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0421 20:29:35.707539    7460 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.197.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-152500"
	  kubeletExtraArgs:
	    node-ip: 172.27.197.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.197.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
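This generated kubeadm.yaml is not handed to a single kubeadm init; as the rest of this log shows, minikube applies it phase by phase during the restart. Condensed from the invocations that appear further below:
	KUBEADM_CFG=/var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config "$KUBEADM_CFG"
	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config "$KUBEADM_CFG"
	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config "$KUBEADM_CFG"
	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config "$KUBEADM_CFG"
	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config "$KUBEADM_CFG"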
	
	I0421 20:29:35.722438    7460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:29:35.745977    7460 command_runner.go:130] > kubeadm
	I0421 20:29:35.745977    7460 command_runner.go:130] > kubectl
	I0421 20:29:35.745977    7460 command_runner.go:130] > kubelet
	I0421 20:29:35.746071    7460 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:29:35.760190    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0421 20:29:35.784056    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0421 20:29:35.822464    7460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:29:35.862860    7460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0421 20:29:35.917518    7460 ssh_runner.go:195] Run: grep 172.27.197.221	control-plane.minikube.internal$ /etc/hosts
	I0421 20:29:35.930181    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.197.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:29:35.974833    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:36.198145    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:29:36.230156    7460 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500 for IP: 172.27.197.221
	I0421 20:29:36.230156    7460 certs.go:194] generating shared ca certs ...
	I0421 20:29:36.230320    7460 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:36.230921    7460 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 20:29:36.231268    7460 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 20:29:36.231415    7460 certs.go:256] generating profile certs ...
	I0421 20:29:36.232154    7460 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\client.key
	I0421 20:29:36.232357    7460 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd
	I0421 20:29:36.232357    7460 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.27.197.221]
	I0421 20:29:36.404379    7460 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd ...
	I0421 20:29:36.404379    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd: {Name:mk151e37b2e5f23f4357e1c585ea50dfc55dbfb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:36.406331    7460 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd ...
	I0421 20:29:36.406331    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd: {Name:mkb0d5b8b39d1bdc0398c0c1cb49a0cc404c6b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:36.407372    7460 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt.1d1b6fdd -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt
	I0421 20:29:36.421322    7460 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key.1d1b6fdd -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key
	I0421 20:29:36.422747    7460 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key
	I0421 20:29:36.422747    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 20:29:36.422747    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 20:29:36.423165    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 20:29:36.423388    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 20:29:36.423528    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0421 20:29:36.423580    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0421 20:29:36.423896    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0421 20:29:36.425093    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0421 20:29:36.425516    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 20:29:36.425984    7460 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 20:29:36.425984    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 20:29:36.425984    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 20:29:36.426558    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 20:29:36.426926    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 20:29:36.427383    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 20:29:36.427666    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 20:29:36.427986    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 20:29:36.428232    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:36.429858    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:29:36.492937    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:29:36.562537    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:29:36.616734    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:29:36.668328    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0421 20:29:36.726384    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0421 20:29:36.781787    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0421 20:29:36.839237    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0421 20:29:36.890184    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 20:29:36.940090    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 20:29:36.989872    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:29:37.043671    7460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0421 20:29:37.095693    7460 ssh_runner.go:195] Run: openssl version
	I0421 20:29:37.104510    7460 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 20:29:37.119112    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 20:29:37.154406    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.162143    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.162143    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.176263    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 20:29:37.186413    7460 command_runner.go:130] > 3ec20f2e
	I0421 20:29:37.202562    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:29:37.240126    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:29:37.276546    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.285878    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.285967    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.299427    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:29:37.311059    7460 command_runner.go:130] > b5213941
	I0421 20:29:37.325728    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0421 20:29:37.360767    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 20:29:37.403042    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.411766    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.411766    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.426479    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 20:29:37.437451    7460 command_runner.go:130] > 51391683
	I0421 20:29:37.451209    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
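The hash-named links created above follow OpenSSL's c_rehash convention: each trusted certificate in /etc/ssl/certs is reachable as <subject-hash>.0, where the hash is exactly the value printed by openssl x509 -hash. For the minikube CA:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	ls -l /etc/ssl/certs/b5213941.0   # symlink resolving to minikubeCA.pem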
	I0421 20:29:37.489747    7460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:29:37.496498    7460 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:29:37.496498    7460 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0421 20:29:37.496498    7460 command_runner.go:130] > Device: 8,1	Inode: 531538      Links: 1
	I0421 20:29:37.496498    7460 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 20:29:37.496498    7460 command_runner.go:130] > Access: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.496498    7460 command_runner.go:130] > Modify: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.496498    7460 command_runner.go:130] > Change: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.496498    7460 command_runner.go:130] >  Birth: 2024-04-21 20:05:40.258227500 +0000
	I0421 20:29:37.511320    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0421 20:29:37.521184    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.535632    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0421 20:29:37.545997    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.559071    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0421 20:29:37.570595    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.583622    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0421 20:29:37.594874    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.608548    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0421 20:29:37.619856    7460 command_runner.go:130] > Certificate will not expire
	I0421 20:29:37.633154    7460 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0421 20:29:37.643354    7460 command_runner.go:130] > Certificate will not expire
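Each of the expiry probes above relies on openssl's -checkend flag: it prints "Certificate will not expire" and exits 0 when the certificate is still valid the given number of seconds from now (86400 s is 24 hours), otherwise it prints "Certificate will expire" and exits non-zero.
	# Exit status 0 (and the message logged above) means the certificate is
	# still valid 24 hours from now; a soon-to-expire cert returns non-zero.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400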
	I0421 20:29:37.643921    7460 kubeadm.go:391] StartCluster: {Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-1525
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.195.108 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:29:37.655586    7460 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 20:29:37.697329    7460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0421 20:29:37.719657    7460 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0421 20:29:37.719754    7460 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0421 20:29:37.719754    7460 command_runner.go:130] > /var/lib/minikube/etcd:
	I0421 20:29:37.719754    7460 command_runner.go:130] > member
	W0421 20:29:37.719831    7460 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0421 20:29:37.719831    7460 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0421 20:29:37.719910    7460 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0421 20:29:37.734242    7460 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0421 20:29:37.756788    7460 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:29:37.758174    7460 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-152500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:29:37.759147    7460 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-152500" cluster setting kubeconfig missing "multinode-152500" context setting]
	I0421 20:29:37.760127    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:37.775376    7460 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:29:37.776553    7460 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.197.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:29:37.778179    7460 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 20:29:37.792760    7460 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:29:37.813454    7460 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0421 20:29:37.813519    7460 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0421 20:29:37.813519    7460 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0421 20:29:37.813519    7460 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0421 20:29:37.813519    7460 command_runner.go:130] >  kind: InitConfiguration
	I0421 20:29:37.813759    7460 command_runner.go:130] >  localAPIEndpoint:
	I0421 20:29:37.813759    7460 command_runner.go:130] > -  advertiseAddress: 172.27.198.190
	I0421 20:29:37.813759    7460 command_runner.go:130] > +  advertiseAddress: 172.27.197.221
	I0421 20:29:37.813759    7460 command_runner.go:130] >    bindPort: 8443
	I0421 20:29:37.813759    7460 command_runner.go:130] >  bootstrapTokens:
	I0421 20:29:37.813759    7460 command_runner.go:130] >    - groups:
	I0421 20:29:37.813849    7460 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0421 20:29:37.813849    7460 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0421 20:29:37.813849    7460 command_runner.go:130] >    name: "multinode-152500"
	I0421 20:29:37.813934    7460 command_runner.go:130] >    kubeletExtraArgs:
	I0421 20:29:37.813934    7460 command_runner.go:130] > -    node-ip: 172.27.198.190
	I0421 20:29:37.813934    7460 command_runner.go:130] > +    node-ip: 172.27.197.221
	I0421 20:29:37.813934    7460 command_runner.go:130] >    taints: []
	I0421 20:29:37.813934    7460 command_runner.go:130] >  ---
	I0421 20:29:37.814106    7460 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0421 20:29:37.814106    7460 command_runner.go:130] >  kind: ClusterConfiguration
	I0421 20:29:37.814106    7460 command_runner.go:130] >  apiServer:
	I0421 20:29:37.814106    7460 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.27.198.190"]
	I0421 20:29:37.814106    7460 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.27.197.221"]
	I0421 20:29:37.814106    7460 command_runner.go:130] >    extraArgs:
	I0421 20:29:37.814106    7460 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0421 20:29:37.814106    7460 command_runner.go:130] >  controllerManager:
	I0421 20:29:37.814106    7460 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.27.198.190
	+  advertiseAddress: 172.27.197.221
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-152500"
	   kubeletExtraArgs:
	-    node-ip: 172.27.198.190
	+    node-ip: 172.27.197.221
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.27.198.190"]
	+  certSANs: ["127.0.0.1", "localhost", "172.27.197.221"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
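The only drift is the control-plane address changing from 172.27.198.190 to 172.27.197.221, apparently because the VM came back from the restart with a different lease. The check, and the fix minikube applies a few lines further down, can be reproduced by hand:
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml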
	I0421 20:29:37.814106    7460 kubeadm.go:1154] stopping kube-system containers ...
	I0421 20:29:37.827484    7460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0421 20:29:37.859712    7460 command_runner.go:130] > a6fab3c7e281
	I0421 20:29:37.859712    7460 command_runner.go:130] > bc85f90f7b18
	I0421 20:29:37.859712    7460 command_runner.go:130] > c9a9145e83af
	I0421 20:29:37.859712    7460 command_runner.go:130] > d6ef972126a9
	I0421 20:29:37.859712    7460 command_runner.go:130] > ad328e25a9d0
	I0421 20:29:37.859712    7460 command_runner.go:130] > 7f128889bd61
	I0421 20:29:37.859712    7460 command_runner.go:130] > 0e66350415f0
	I0421 20:29:37.859712    7460 command_runner.go:130] > a3675838aa7c
	I0421 20:29:37.859712    7460 command_runner.go:130] > 7ecc14e6d519
	I0421 20:29:37.859712    7460 command_runner.go:130] > eb483e47dc21
	I0421 20:29:37.859712    7460 command_runner.go:130] > 0bd5af3b1831
	I0421 20:29:37.859712    7460 command_runner.go:130] > 0690342790fe
	I0421 20:29:37.859712    7460 command_runner.go:130] > 5a55ab72d84e
	I0421 20:29:37.859712    7460 command_runner.go:130] > b0eb5fe00481
	I0421 20:29:37.859712    7460 command_runner.go:130] > 6dd47a357dc9
	I0421 20:29:37.859712    7460 command_runner.go:130] > e6ae7d993bb9
	I0421 20:29:37.862946    7460 docker.go:483] Stopping containers: [a6fab3c7e281 bc85f90f7b18 c9a9145e83af d6ef972126a9 ad328e25a9d0 7f128889bd61 0e66350415f0 a3675838aa7c 7ecc14e6d519 eb483e47dc21 0bd5af3b1831 0690342790fe 5a55ab72d84e b0eb5fe00481 6dd47a357dc9 e6ae7d993bb9]
	I0421 20:29:37.873667    7460 ssh_runner.go:195] Run: docker stop a6fab3c7e281 bc85f90f7b18 c9a9145e83af d6ef972126a9 ad328e25a9d0 7f128889bd61 0e66350415f0 a3675838aa7c 7ecc14e6d519 eb483e47dc21 0bd5af3b1831 0690342790fe 5a55ab72d84e b0eb5fe00481 6dd47a357dc9 e6ae7d993bb9
	I0421 20:29:37.901109    7460 command_runner.go:130] > a6fab3c7e281
	I0421 20:29:37.901109    7460 command_runner.go:130] > bc85f90f7b18
	I0421 20:29:37.901109    7460 command_runner.go:130] > c9a9145e83af
	I0421 20:29:37.901109    7460 command_runner.go:130] > d6ef972126a9
	I0421 20:29:37.901109    7460 command_runner.go:130] > ad328e25a9d0
	I0421 20:29:37.901109    7460 command_runner.go:130] > 7f128889bd61
	I0421 20:29:37.901109    7460 command_runner.go:130] > 0e66350415f0
	I0421 20:29:37.901109    7460 command_runner.go:130] > a3675838aa7c
	I0421 20:29:37.901109    7460 command_runner.go:130] > 7ecc14e6d519
	I0421 20:29:37.901109    7460 command_runner.go:130] > eb483e47dc21
	I0421 20:29:37.901109    7460 command_runner.go:130] > 0bd5af3b1831
	I0421 20:29:37.901109    7460 command_runner.go:130] > 0690342790fe
	I0421 20:29:37.901109    7460 command_runner.go:130] > 5a55ab72d84e
	I0421 20:29:37.901109    7460 command_runner.go:130] > b0eb5fe00481
	I0421 20:29:37.901109    7460 command_runner.go:130] > 6dd47a357dc9
	I0421 20:29:37.901109    7460 command_runner.go:130] > e6ae7d993bb9
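minikube first lists the kube-system pod containers and then stops them with a second command; for anyone reproducing this manually, an equivalent one-liner (illustrative only, not what minikube itself runs) is:
	# Stop every container whose name matches k8s_*_(kube-system)_ in one go.
	docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop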
	I0421 20:29:37.918633    7460 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0421 20:29:37.965924    7460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0421 20:29:37.986714    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0421 20:29:37.986916    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0421 20:29:37.986916    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0421 20:29:37.986916    7460 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:29:37.987018    7460 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0421 20:29:37.987089    7460 kubeadm.go:156] found existing configuration files:
	
	I0421 20:29:38.000500    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0421 20:29:38.018614    7460 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:29:38.019067    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0421 20:29:38.033002    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0421 20:29:38.070050    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0421 20:29:38.090150    7460 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:29:38.090306    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0421 20:29:38.102851    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0421 20:29:38.136742    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0421 20:29:38.156682    7460 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:29:38.156682    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0421 20:29:38.168669    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0421 20:29:38.209236    7460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0421 20:29:38.231178    7460 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:29:38.231178    7460 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0421 20:29:38.244170    7460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
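The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not point at control-plane.minikube.internal:8443 is treated as stale and removed (here the files are simply absent, so each grep fails and the rm is a no-op). In condensed form:
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done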
	I0421 20:29:38.277227    7460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0421 20:29:38.297987    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:38.582552    7460 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0421 20:29:38.582678    7460 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0421 20:29:38.582678    7460 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0421 20:29:38.582749    7460 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0421 20:29:38.582818    7460 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0421 20:29:38.582818    7460 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0421 20:29:38.582843    7460 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0421 20:29:38.582843    7460 command_runner.go:130] > [certs] Using the existing "sa" key
	I0421 20:29:38.582891    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:39.974290    7460 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0421 20:29:39.974405    7460 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0421 20:29:39.974405    7460 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0421 20:29:39.974470    7460 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0421 20:29:39.974470    7460 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0421 20:29:39.974547    7460 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0421 20:29:39.974547    7460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3915727s)
	I0421 20:29:39.974632    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:40.321904    7460 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:29:40.321978    7460 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:29:40.322043    7460 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0421 20:29:40.322043    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:40.437343    7460 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0421 20:29:40.437379    7460 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0421 20:29:40.437379    7460 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0421 20:29:40.437379    7460 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0421 20:29:40.437379    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:40.570386    7460 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0421 20:29:40.570478    7460 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:29:40.584429    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:41.087967    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:41.593558    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:42.088012    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:42.595185    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:29:42.627196    7460 command_runner.go:130] > 1865
	I0421 20:29:42.627196    7460 api_server.go:72] duration metric: took 2.0567023s to wait for apiserver process to appear ...
	I0421 20:29:42.627196    7460 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:29:42.627196    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:46.333263    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:29:46.333666    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:29:46.333736    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:46.388953    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0421 20:29:46.389557    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0421 20:29:46.628371    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:46.638351    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:29:46.638449    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:29:47.136338    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:47.143948    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:29:47.143948    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:29:47.641670    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:47.649675    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0421 20:29:47.649675    7460 api_server.go:103] status: https://172.27.197.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0421 20:29:48.141885    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:29:48.148620    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 200:
	ok
	I0421 20:29:48.149744    7460 round_trippers.go:463] GET https://172.27.197.221:8443/version
	I0421 20:29:48.149904    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:48.149904    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:48.149904    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:48.163084    7460 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 20:29:48.163084    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:48.163084    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Content-Length: 263
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:48 GMT
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Audit-Id: 848e06fe-0510-4529-a147-ba67c906e378
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:48.163084    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:48.163084    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:48.163084    7460 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0421 20:29:48.163084    7460 api_server.go:141] control plane version: v1.30.0
	I0421 20:29:48.163084    7460 api_server.go:131] duration metric: took 5.5358475s to wait for apiserver health ...
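
The block above records minikube polling https://172.27.197.221:8443/healthz roughly every half second, treating the early 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses as "not ready yet" until the endpoint finally returns 200/ok. A minimal sketch of that kind of wait loop is below; it is not minikube's actual api_server.go, and the function name waitForHealthz, the one-minute deadline and the InsecureSkipVerify transport are illustrative assumptions (the real check authenticates against the cluster CA).

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200, or the deadline expires.
    // 403/500 responses are treated as "not ready yet", mirroring the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Illustration only: skip TLS verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports "ok"
                }
                // e.g. 403 before RBAC bootstrap, 500 while post-start hooks are pending
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for " + url)
    }

    func main() {
        if err := waitForHealthz("https://172.27.197.221:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
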
	I0421 20:29:48.163084    7460 cni.go:84] Creating CNI manager for ""
	I0421 20:29:48.163084    7460 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0421 20:29:48.166720    7460 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0421 20:29:48.181250    7460 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0421 20:29:48.191248    7460 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0421 20:29:48.191340    7460 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0421 20:29:48.191340    7460 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0421 20:29:48.191340    7460 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0421 20:29:48.191340    7460 command_runner.go:130] > Access: 2024-04-21 20:28:10.782547100 +0000
	I0421 20:29:48.191340    7460 command_runner.go:130] > Modify: 2024-04-18 23:25:47.000000000 +0000
	I0421 20:29:48.191340    7460 command_runner.go:130] > Change: 2024-04-21 20:28:01.443000000 +0000
	I0421 20:29:48.191340    7460 command_runner.go:130] >  Birth: -
	I0421 20:29:48.191523    7460 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0421 20:29:48.191614    7460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0421 20:29:48.244292    7460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0421 20:29:49.150827    7460 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0421 20:29:49.151603    7460 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0421 20:29:49.151603    7460 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0421 20:29:49.151603    7460 command_runner.go:130] > daemonset.apps/kindnet configured
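
The four "unchanged/configured" lines above are the output of the kubectl apply that minikube ran over SSH against the kindnet manifest it had just copied to /var/tmp/minikube/cni.yaml. A rough, hypothetical equivalent that shells out to the same bundled kubectl directly is sketched below; the paths are taken from the log, and running the command locally rather than through minikube's ssh_runner is an assumption made only for illustration.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Paths as they appear in the log above; adjust for your own environment.
        kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
        kubeconfig := "/var/lib/minikube/kubeconfig"
        manifest := "/var/tmp/minikube/cni.yaml"

        // Equivalent of: kubectl apply --kubeconfig=... -f /var/tmp/minikube/cni.yaml
        cmd := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
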
	I0421 20:29:49.151647    7460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:29:49.151908    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:29:49.151950    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.151950    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.151950    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.159729    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:29:49.159729    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.159729    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.159729    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Audit-Id: 4ab92c78-a462-4ce6-8a25-8aa97036617a
	I0421 20:29:49.159729    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.162047    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1859"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 85368 chars]
	I0421 20:29:49.168613    7460 system_pods.go:59] 12 kube-system pods found
	I0421 20:29:49.169551    7460 system_pods.go:61] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "etcd-multinode-152500" [e5f399f5-b04e-4ac1-8646-d103d2d8f74a] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kindnet-kvd8z" [e6d4f203-892a-4a67-a6aa-38161a3749da] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kindnet-rkgsx" [ba1febf0-40e8-4a24-83e0-cbb9f6c01e34] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-apiserver-multinode-152500" [52744df0-77af-4caf-b69d-af2789c25eab] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-proxy-9zlm5" [61ba111b-28e9-40db-943d-22a595fdc27e] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-proxy-sp699" [8eab29a5-b24b-4d2c-a829-fbf2770ef34c] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:29:49.169551    7460 system_pods.go:61] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:29:49.169551    7460 system_pods.go:74] duration metric: took 17.9034ms to wait for pod list to return data ...
	I0421 20:29:49.169551    7460 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:29:49.169551    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes
	I0421 20:29:49.169551    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.169551    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.169551    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.175274    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:49.175274    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.176205    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Audit-Id: 05f98e78-cd0c-4372-a45e-a3068abc31c7
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.176246    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.176246    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.176475    7460 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1859"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0421 20:29:49.178208    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:29:49.178208    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:29:49.178286    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:29:49.178286    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:29:49.178286    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:29:49.178286    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:29:49.178337    7460 node_conditions.go:105] duration metric: took 8.7349ms to run NodePressure ...
	I0421 20:29:49.178337    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0421 20:29:49.543082    7460 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0421 20:29:49.543140    7460 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0421 20:29:49.543140    7460 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0421 20:29:49.543382    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0421 20:29:49.543443    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.543443    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.543472    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.577045    7460 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0421 20:29:49.577045    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.577045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.577045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.577045    7460 round_trippers.go:580]     Audit-Id: 6e6b2985-a2c0-40a8-a94d-a8209578e4a2
	I0421 20:29:49.579061    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1865"},"items":[{"metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"e5f399f5-b04e-4ac1-8646-d103d2d8f74a","resourceVersion":"1863","creationTimestamp":"2024-04-21T20:05:53Z","deletionTimestamp":"2024-04-21T20:29:49Z","deletionGracePeriodSeconds":0,"labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.198.190:2379","kubernetes.io/config.hash":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.mirror":"5b332f12d2b025e34ff6a19060e65329","kubernetes.io/config.seen":"2024-04-21T20:05:53.333716613Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-2
1T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot [truncated 29325 chars]
	I0421 20:29:49.580806    7460 retry.go:31] will retry after 214.399221ms: kubelet not initialised
	I0421 20:29:49.796544    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0421 20:29:49.796588    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.796617    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.796617    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.808267    7460 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0421 20:29:49.808267    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.808267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.808267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Audit-Id: 05985484-8311-4706-803a-1a3e1f5d110d
	I0421 20:29:49.808267    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.810004    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1878"},"items":[{"metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1873","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0421 20:29:49.811805    7460 kubeadm.go:733] kubelet initialised
	I0421 20:29:49.811864    7460 kubeadm.go:734] duration metric: took 268.7225ms waiting for restarted kubelet to initialise ...
	I0421 20:29:49.811864    7460 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:29:49.811939    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:29:49.811939    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.811939    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.811939    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.821672    7460 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 20:29:49.821672    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.821672    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Audit-Id: f3c069e7-3a61-4a63-aee9-6efe4ab11baa
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.821672    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.821672    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.823700    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1879"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 87967 chars]
	I0421 20:29:49.828114    7460 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.828418    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:49.828418    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.828418    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.828491    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.836244    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:29:49.836244    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.836244    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Audit-Id: f6e9de13-c425-44a9-9cc5-e76a736feacc
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.836244    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.836244    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.836833    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"448","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0421 20:29:49.837861    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.837861    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.837861    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.837861    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.851397    7460 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 20:29:49.852274    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.852358    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.852358    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.852358    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.852358    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.852390    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.852390    7460 round_trippers.go:580]     Audit-Id: 4ae82317-ccdb-454e-86fa-153a7e8dea15
	I0421 20:29:49.852390    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.852965    7460 pod_ready.go:97] node "multinode-152500" hosting pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.852965    7460 pod_ready.go:81] duration metric: took 24.7918ms for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.852965    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
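
The pod_ready.go wait seen above fetches each system-critical pod and then the node it is scheduled on, and skips the pod whenever that node's Ready condition is not True, which is why coredns-7db6d8ff4d-v7pf8 is skipped here while multinode-152500 still reports Ready=False. A stripped-down sketch of that check using client-go follows; podAndNodeReady is a hypothetical helper, not minikube's code, and the kubeconfig path and pod name are simply the values visible in this log.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podAndNodeReady reports whether the pod has condition Ready=True and
    // whether the node it runs on has condition Ready=True.
    func podAndNodeReady(cs *kubernetes.Clientset, ns, name string) (podReady, nodeReady bool, err error) {
        ctx := context.Background()
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                podReady = true
            }
        }
        node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return podReady, false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                nodeReady = true
            }
        }
        return podReady, nodeReady, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        p, n, err := podAndNodeReady(cs, "kube-system", "coredns-7db6d8ff4d-v7pf8")
        fmt.Println("pod ready:", p, "node ready:", n, "err:", err)
    }
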
	I0421 20:29:49.852965    7460 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.852965    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:29:49.852965    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.852965    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.852965    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.869721    7460 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0421 20:29:49.869721    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.869721    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.869721    7460 round_trippers.go:580]     Audit-Id: afa2c9ad-d009-43cc-b361-ae3d66d29801
	I0421 20:29:49.869721    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.870770    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.870805    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.870805    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.871438    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1873","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0421 20:29:49.872514    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.872543    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.872543    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.872543    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.880717    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:49.880717    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.880717    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.880717    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.880717    7460 round_trippers.go:580]     Audit-Id: 2fe21853-87e3-4030-a406-3338fd290166
	I0421 20:29:49.881400    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.881400    7460 pod_ready.go:97] node "multinode-152500" hosting pod "etcd-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.881400    7460 pod_ready.go:81] duration metric: took 28.4349ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.881400    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "etcd-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.881925    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.882114    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:29:49.882114    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.882114    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.882114    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.890421    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:49.890421    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.890421    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.890421    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.890835    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.890835    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.890835    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.890835    7460 round_trippers.go:580]     Audit-Id: 555ff6f9-2002-474e-b9b2-453b4347e81c
	I0421 20:29:49.891193    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"6e73294a-2a7d-4f05-beb1-bb011d5f1f52","resourceVersion":"1875","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.197.221:8443","kubernetes.io/config.hash":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.mirror":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.seen":"2024-04-21T20:29:40.518049422Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0421 20:29:49.891391    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.891391    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.891391    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.891391    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.895775    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:49.895775    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.895775    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.896162    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Audit-Id: 78a747f5-5492-45e9-a80e-5f7bb096d02c
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.896162    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.896282    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.896282    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-apiserver-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.896282    7460 pod_ready.go:81] duration metric: took 14.3566ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.896282    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-apiserver-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.896282    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:49.896815    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:29:49.896872    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.896872    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.896872    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.900693    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:49.900693    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.900693    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.900693    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Audit-Id: 4994d00f-cda1-4eee-8fc8-6e1671fceb8f
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.900693    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.901684    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1868","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0421 20:29:49.901684    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:49.902292    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:49.902292    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:49.902292    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:49.927526    7460 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0421 20:29:49.927948    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Audit-Id: 41c81ab6-a543-4abb-9f00-0976b5275192
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:49.927948    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:49.927948    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:49.927948    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:49 GMT
	I0421 20:29:49.930855    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:49.932052    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-controller-manager-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.932052    7460 pod_ready.go:81] duration metric: took 35.7693ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:49.932105    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-controller-manager-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:49.932144    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:50.000576    7460 request.go:629] Waited for 68.3953ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:29:50.000961    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:29:50.000961    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.000961    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.000961    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.005580    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.005580    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.005580    7460 round_trippers.go:580]     Audit-Id: 8f55617e-fe08-4428-88d8-1d8018df57ec
	I0421 20:29:50.005647    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.005647    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.005647    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.005647    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.005647    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.005770    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9zlm5","generateName":"kube-proxy-","namespace":"kube-system","uid":"61ba111b-28e9-40db-943d-22a595fdc27e","resourceVersion":"1803","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0421 20:29:50.209283    7460 request.go:629] Waited for 202.185ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:29:50.209283    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:29:50.209283    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.209283    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.209559    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.213642    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.213642    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.213642    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.213642    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.213642    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.213642    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.213642    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.213829    7460 round_trippers.go:580]     Audit-Id: 5aa4f8f0-c21e-43f8-8780-9ae607c967c9
	I0421 20:29:50.214016    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"1805","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4486 chars]
	I0421 20:29:50.214682    7460 pod_ready.go:97] node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
	I0421 20:29:50.214682    7460 pod_ready.go:81] duration metric: took 282.5354ms for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:50.214682    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
	I0421 20:29:50.214682    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:50.397995    7460 request.go:629] Waited for 183.0561ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:29:50.398323    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:29:50.398323    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.398323    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.398323    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.403101    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.403101    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Audit-Id: bb2c5247-456d-49a9-954e-e4b3bbfed67b
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.403101    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.403101    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.403101    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.403101    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"1879","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6230 chars]
	I0421 20:29:50.602260    7460 request.go:629] Waited for 197.574ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:50.602260    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:50.602260    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.602260    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.602260    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.607225    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.607225    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.607225    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Audit-Id: de72a3e3-53d4-458c-a438-54cc501af205
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.607225    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.607225    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.608097    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:50.609364    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-proxy-kl8t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:50.609542    7460 pod_ready.go:81] duration metric: took 394.8579ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:50.609542    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-proxy-kl8t2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
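The repeated request.go:629 lines ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's built-in client-side rate limiter, which defaults to roughly 5 requests per second with a burst of 10; that is why the readiness probes above pause ~180-200ms between calls. A short sketch, under those assumed defaults, of where the limits live on a rest.Config (the raised values are purely illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig this run writes (path taken from the log above).
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        // rest.Config.QPS and Burst default to about 5 and 10 when left unset;
        // raising them shrinks the "client-side throttling" waits seen above.
        cfg.QPS = 50    // illustrative value
        cfg.Burst = 100 // illustrative value

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("built rate-limited client: %T\n", cs)
    }

Keeping the limiter conservative avoids hammering a freshly restarted apiserver, at the cost of the small per-request waits logged here.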
	I0421 20:29:50.609611    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:50.807980    7460 request.go:629] Waited for 198.2032ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:29:50.808289    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:29:50.808289    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.808289    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:50.808289    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.813046    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:50.813046    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:50.813116    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:50.813173    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:50.813173    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:50.813173    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:50 GMT
	I0421 20:29:50.813173    7460 round_trippers.go:580]     Audit-Id: c4d12c3b-17c3-4ade-8228-e6e48183fc14
	I0421 20:29:50.813173    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:50.813326    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sp699","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab29a5-b24b-4d2c-a829-fbf2770ef34c","resourceVersion":"1781","creationTimestamp":"2024-04-21T20:13:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:13:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0421 20:29:50.997013    7460 request.go:629] Waited for 182.4308ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:29:50.997121    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:29:50.997121    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:50.997156    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:50.997156    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.000797    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:51.001741    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.001741    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.001741    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.001741    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.001805    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.001805    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.001805    7460 round_trippers.go:580]     Audit-Id: b260768d-3785-4708-879e-c65d46b77d0b
	I0421 20:29:51.001946    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m03","uid":"9c2fb882-be16-4c12-815f-4dd3e35c66ee","resourceVersion":"1789","creationTimestamp":"2024-04-21T20:25:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_25_05_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:25:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0421 20:29:51.002046    7460 pod_ready.go:97] node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:29:51.002046    7460 pod_ready.go:81] duration metric: took 392.4325ms for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:51.002046    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:29:51.002046    7460 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:51.201214    7460 request.go:629] Waited for 198.9119ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:29:51.201436    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:29:51.201475    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:51.201475    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.201507    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:51.216020    7460 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0421 20:29:51.217074    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.217074    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Audit-Id: f5df4fbd-0c2a-4dc5-8595-58dc469dbde6
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.217074    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.217074    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.218247    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"1871","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0421 20:29:51.405450    7460 request.go:629] Waited for 186.6206ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:51.405998    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:51.405998    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:51.406071    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:51.406071    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.411374    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:51.411374    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.411374    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Audit-Id: 8e442dc2-0ec4-47fa-970c-4dbb614a49a1
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.411374    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.411374    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.412047    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:51.412530    7460 pod_ready.go:97] node "multinode-152500" hosting pod "kube-scheduler-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:51.412611    7460 pod_ready.go:81] duration metric: took 410.5611ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	E0421 20:29:51.412611    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500" hosting pod "kube-scheduler-multinode-152500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500" has status "Ready":"False"
	I0421 20:29:51.412611    7460 pod_ready.go:38] duration metric: took 1.6006595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
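The summary above closes the "extra waiting" phase, which walks the listed label selectors (k8s-app=kube-dns, component=etcd, and so on) and checks every matching kube-system pod. An illustrative client-go sketch of fetching those pods by selector (a simplification, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One selector per system-critical component checked in the log above.
        selectors := []string{
            "k8s-app=kube-dns",
            "component=etcd",
            "component=kube-apiserver",
            "component=kube-controller-manager",
            "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                panic(err)
            }
            for _, p := range pods.Items {
                fmt.Printf("%-40s %s\n", p.Name, p.Status.Phase)
            }
        }
    }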
	I0421 20:29:51.412611    7460 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0421 20:29:51.437138    7460 command_runner.go:130] > -16
	I0421 20:29:51.437138    7460 ops.go:34] apiserver oom_adj: -16
	I0421 20:29:51.437138    7460 kubeadm.go:591] duration metric: took 13.7170937s to restartPrimaryControlPlane
	I0421 20:29:51.437387    7460 kubeadm.go:393] duration metric: took 13.7931163s to StartCluster
	I0421 20:29:51.437387    7460 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:29:51.437599    7460 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:29:51.440009    7460 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
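settings.go and lock.go above take a per-file lock and rewrite the Jenkins kubeconfig so it points at the restarted control plane on 172.27.197.221:8443. A rough sketch of writing such an entry with client-go's clientcmd API (the file lock is omitted and the certificate paths are placeholders, not the paths this run uses):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        cfg := clientcmdapi.NewConfig()
        cfg.Clusters["multinode-152500"] = &clientcmdapi.Cluster{
            Server:               "https://172.27.197.221:8443",
            CertificateAuthority: `C:\path\to\ca.crt`, // placeholder
        }
        cfg.AuthInfos["multinode-152500"] = &clientcmdapi.AuthInfo{
            ClientCertificate: `C:\path\to\client.crt`, // placeholder
            ClientKey:         `C:\path\to\client.key`, // placeholder
        }
        cfg.Contexts["multinode-152500"] = &clientcmdapi.Context{
            Cluster:  "multinode-152500",
            AuthInfo: "multinode-152500",
        }
        cfg.CurrentContext = "multinode-152500"

        // WriteToFile marshals the config and replaces the file contents.
        if err := clientcmd.WriteToFile(*cfg, `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`); err != nil {
            panic(err)
        }
    }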
	I0421 20:29:51.440925    7460 start.go:234] Will wait 6m0s for node &{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0421 20:29:51.440925    7460 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0421 20:29:51.445169    7460 out.go:177] * Verifying Kubernetes components...
	I0421 20:29:51.452565    7460 out.go:177] * Enabled addons: 
	I0421 20:29:51.441755    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:29:51.457049    7460 addons.go:505] duration metric: took 16.0612ms for enable addons: enabled=[]
	I0421 20:29:51.475508    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:29:51.852219    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
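ssh_runner.go executes the two systemctl commands above inside the Hyper-V guest over SSH. A rough, stand-alone equivalent with golang.org/x/crypto/ssh is sketched below; the SSH user, key path, and port are assumptions, and minikube's own runner layers retries and output capture on top of this:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func run(client *ssh.Client, cmd string) error {
        // Each command needs its own session: an SSH session can execute only once.
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        fmt.Printf("$ %s\n%s", cmd, out)
        return err
    }

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\.minikube\machines\multinode-152500\id_rsa`) // assumed key path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.27.197.221:22", &ssh.ClientConfig{
            User:            "docker", // commonly the minikube guest user (an assumption here)
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        for _, cmd := range []string{"sudo systemctl daemon-reload", "sudo systemctl start kubelet"} {
            if err := run(client, cmd); err != nil {
                panic(err)
            }
        }
    }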
	I0421 20:29:51.887023    7460 node_ready.go:35] waiting up to 6m0s for node "multinode-152500" to be "Ready" ...
	I0421 20:29:51.887194    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:51.887194    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:51.887326    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:51.887326    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:51.894501    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:29:51.894501    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Audit-Id: f9619b6d-d7a6-48bd-bed5-0824614a8ff7
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:51.894554    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:51.894554    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:51.894554    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:51 GMT
	I0421 20:29:51.896417    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:52.388332    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:52.388332    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:52.388332    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:52.388332    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:52.392923    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:52.393092    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Audit-Id: 0fd1d6a0-7010-4caf-ae3c-5fef1b1708e0
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:52.393092    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:52.393092    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:52.393092    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:52 GMT
	I0421 20:29:52.393402    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:52.887985    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:52.887985    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:52.888119    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:52.888119    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:52.892422    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:52.892877    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Audit-Id: 5d355065-e438-4fa1-bb10-9f228b65cf54
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:52.892877    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:52.892877    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:52.892877    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:52 GMT
	I0421 20:29:52.893073    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:53.387940    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:53.388175    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:53.388175    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:53.388175    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:53.396655    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:53.397633    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:53.397659    7460 round_trippers.go:580]     Audit-Id: f1467efe-65a1-4a5a-b4ee-7230df6307dd
	I0421 20:29:53.397659    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:53.397659    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:53.397770    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:53.397770    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:53.397770    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:53 GMT
	I0421 20:29:53.397770    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:53.890988    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:53.891077    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:53.891077    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:53.891141    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:53.895066    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:53.895066    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:53.895066    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:53.895066    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:53.895066    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:53.895066    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:53.895066    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:53 GMT
	I0421 20:29:53.895685    7460 round_trippers.go:580]     Audit-Id: 0951338f-bd6c-4ad0-ac05-da9dfac3427a
	I0421 20:29:53.896042    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:53.896794    7460 node_ready.go:53] node "multinode-152500" has status "Ready":"False"
	I0421 20:29:54.389759    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:54.389759    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:54.389759    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:54.389862    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:54.396683    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:29:54.396683    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:54.396683    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:54.396683    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:54 GMT
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Audit-Id: 44f21199-f012-4683-8bc3-c6108c3dde16
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:54.396683    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:54.397330    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:54.889672    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:54.889735    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:54.889735    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:54.889735    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:54.893578    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:54.893578    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:54.893578    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:54.893578    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:54 GMT
	I0421 20:29:54.894165    7460 round_trippers.go:580]     Audit-Id: bb10f650-acb5-472b-9bdb-5992b986ce08
	I0421 20:29:54.894165    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:54.894165    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:54.894165    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:54.894626    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:55.388441    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:55.388441    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.388441    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.388441    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.393429    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:55.393429    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.393429    7460 round_trippers.go:580]     Audit-Id: 49464435-f019-4c0c-964b-3c1649f07f43
	I0421 20:29:55.393429    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.393429    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.393429    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.393429    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.393943    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.394473    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1820","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0421 20:29:55.888045    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:55.888252    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.888252    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.888252    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.892095    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:55.892262    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.892262    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.892262    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Audit-Id: c8b05f98-b542-4b08-9dd9-8cd266749d28
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.892262    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.892355    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:55.893000    7460 node_ready.go:49] node "multinode-152500" has status "Ready":"True"
	I0421 20:29:55.893000    7460 node_ready.go:38] duration metric: took 4.0058955s for node "multinode-152500" to be "Ready" ...
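node_ready.go above polls GET /api/v1/nodes/multinode-152500 roughly every 500ms until the node's Ready condition turns True, which here takes about 4s after kubelet restarts. A sketch of that poll using apimachinery's wait helper (interval and timeout mirror the log; this is not minikube's actual loop):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms for up to 6 minutes, matching the wait seen in the log.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-152500", metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API errors as transient and keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("node Ready wait finished:", err)
    }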
	I0421 20:29:55.893139    7460 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:29:55.893139    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:29:55.893139    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.893139    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.893139    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.898737    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:55.898737    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.898737    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.898737    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.899244    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.899244    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.899244    7460 round_trippers.go:580]     Audit-Id: 5fd6fa6b-8bda-44a9-9688-b2291dc1c8aa
	I0421 20:29:55.899244    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.901787    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1908"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87393 chars]
	I0421 20:29:55.908545    7460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:55.910083    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:55.910083    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.910083    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.910083    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.914987    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:55.914987    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.914987    7460 round_trippers.go:580]     Audit-Id: 5cd9a830-e0ad-40a8-8705-137815d2acff
	I0421 20:29:55.914987    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.914987    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.914987    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.915583    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.915583    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.915674    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:55.916585    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:55.916661    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:55.916661    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:55.916661    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:55.920075    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:55.920075    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:55.920453    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:55.920453    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:55 GMT
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Audit-Id: f0a2be9b-d555-4552-8b58-aec8f64fbb34
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:55.920453    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:55.920590    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:56.419369    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:56.419369    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.419369    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.419610    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.424605    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:56.424647    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.424686    7460 round_trippers.go:580]     Audit-Id: 6d6511eb-4bb4-44b6-8f22-0e4f6e8e781f
	I0421 20:29:56.424708    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.424708    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.424708    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.424708    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.424708    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.426250    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:56.427107    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:56.427107    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.427107    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.427107    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.431776    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:56.431776    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.431776    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.432310    7460 round_trippers.go:580]     Audit-Id: 95a7cfcd-e0e8-4b63-ab9a-a0585da570e9
	I0421 20:29:56.432310    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.432310    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.432310    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.432310    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.432508    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:56.923231    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:56.923511    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.923511    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.923511    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.927760    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:56.928424    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.928424    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.928424    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Audit-Id: 69e230fc-2c81-49eb-9506-50c74745b11f
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.928424    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.928701    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:56.929569    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:56.929675    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:56.929675    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:56.929675    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:56.932846    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:56.933155    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:56.933155    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:56 GMT
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Audit-Id: 9dabddad-4098-471b-8a65-3eb7c2628044
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:56.933155    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:56.933155    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:56.933548    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:57.410668    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:57.410827    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.410827    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.410827    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.415741    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:57.415741    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Audit-Id: d65cb4a7-41ae-425b-a06e-30d40115acbe
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.415741    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.415741    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.415741    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.416452    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:57.417103    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:57.417103    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.417103    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.417103    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.421305    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:57.421305    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Audit-Id: 47e8858b-d1b1-4bbd-b696-130bd36563cc
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.421305    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.421305    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.421305    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.421305    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:57.911785    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:57.911864    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.911864    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.911898    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.915251    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:57.915251    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.915796    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Audit-Id: 9836b6cd-9949-40df-90e6-10af4eb294be
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.915796    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.915937    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.916194    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:57.917526    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:57.917526    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:57.917605    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:57.917605    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:57.920025    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:57.920458    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Audit-Id: d120ae42-8f61-49ce-b761-b84228a399d9
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:57.920505    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:57.920505    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:57.920505    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:57 GMT
	I0421 20:29:57.920939    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:57.921559    7460 pod_ready.go:102] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"False"
	I0421 20:29:58.423700    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:58.423700    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.423700    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.423700    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.427322    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:58.427778    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.427778    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Audit-Id: e900778c-9cb9-46ed-aa34-0dc4ca9825d2
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.427843    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.427843    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.428142    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1880","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0421 20:29:58.429046    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:58.429100    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.429100    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.429134    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.431428    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:58.431428    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.431428    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.432389    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.432389    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.432389    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.432389    7460 round_trippers.go:580]     Audit-Id: 4b5fda83-9160-4aad-8630-ccc47bb05c32
	I0421 20:29:58.432389    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.432468    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:58.911977    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:58.912010    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.912082    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.912082    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.920596    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:29:58.920632    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.920632    7460 round_trippers.go:580]     Audit-Id: 501ccf0f-6126-4c71-896a-b0fe826bf161
	I0421 20:29:58.920691    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.920691    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.920691    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.920691    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.920691    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.920691    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1925","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0421 20:29:58.921845    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:58.921877    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:58.921877    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:58.921949    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:58.926276    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:58.926276    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Audit-Id: f9c461f6-f77e-44d8-bd84-f33781ec9cc9
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:58.926276    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:58.926276    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:58.926276    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:58 GMT
	I0421 20:29:58.926276    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.414648    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:59.414774    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.414838    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.414838    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.419200    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:59.419959    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.419959    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.419959    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Audit-Id: 6f54c1e9-1acc-4409-a416-2afdf3a0c805
	I0421 20:29:59.419959    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.420285    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1925","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0421 20:29:59.420890    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.420890    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.421107    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.421107    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.426141    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:29:59.426141    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.426141    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.426141    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Audit-Id: 070bf4bd-7b68-4c3e-b3a2-2b6b3cd74eac
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.426141    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.427553    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.919894    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:29:59.919969    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.919969    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.919969    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.923306    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:59.923914    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Audit-Id: 1e3183f8-dfdc-4a75-842e-294e4824144e
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.923914    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.923914    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.923914    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.925297    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0421 20:29:59.926246    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.926246    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.926299    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.926299    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.940135    7460 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0421 20:29:59.940135    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.940135    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Audit-Id: d9ac6475-6caa-4df3-8db8-9878e650f378
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.940135    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.940135    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.941144    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.942893    7460 pod_ready.go:92] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:29:59.942893    7460 pod_ready.go:81] duration metric: took 4.0329416s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
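(The log above is the pod_ready wait loop doing its work: roughly every 500 ms it GETs the coredns pod and the node it runs on, checks the pod's Ready condition, and repeats until the condition flips to True. As a rough, hedged sketch only — not minikube's actual pod_ready code — the same check can be expressed against client-go as below; the kubeconfig path is a placeholder and the interval/timeout simply mirror the cadence visible in the log.)

// Illustrative sketch: poll a pod's Ready condition the way the wait loop above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig location; adjust for the environment under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500 ms for up to 6 minutes, matching the cadence and budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-v7pf8", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready wait finished:", err)
}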
	I0421 20:29:59.942893    7460 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.942893    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:29:59.942893    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.942893    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.942893    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.952679    7460 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0421 20:29:59.952679    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.952679    7460 round_trippers.go:580]     Audit-Id: fa81c032-1600-4fe7-a5b6-f7cf9bb44185
	I0421 20:29:59.952679    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.952679    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.953057    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.953057    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.953057    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.954048    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1914","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0421 20:29:59.954750    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.954791    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.954832    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.954832    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.961777    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:29:59.961777    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Audit-Id: 2259ae4b-8980-4078-ace3-0167a6cdbcf2
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.961777    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.961777    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.961777    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.961777    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.962819    7460 pod_ready.go:92] pod "etcd-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:29:59.962819    7460 pod_ready.go:81] duration metric: took 19.9256ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.962819    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.962819    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:29:59.962819    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.962819    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.962819    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.966468    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:29:59.966468    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.966468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.966468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Audit-Id: b5250bfa-0053-4caa-9ce0-6d0f840536cb
	I0421 20:29:59.966468    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.966468    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"6e73294a-2a7d-4f05-beb1-bb011d5f1f52","resourceVersion":"1911","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.197.221:8443","kubernetes.io/config.hash":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.mirror":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.seen":"2024-04-21T20:29:40.518049422Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0421 20:29:59.966468    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.966468    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.966468    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.966468    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.970715    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:29:59.970715    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.971468    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.971468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.971468    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.971468    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.971468    7460 round_trippers.go:580]     Audit-Id: 26ffaf05-56b6-4078-88ad-d45404ef5e71
	I0421 20:29:59.971520    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.971677    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:29:59.971677    7460 pod_ready.go:92] pod "kube-apiserver-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:29:59.971677    7460 pod_ready.go:81] duration metric: took 8.858ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.971677    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:29:59.971677    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:29:59.971677    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.971677    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.971677    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.974573    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:59.974573    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.974573    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.974573    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.974573    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.974573    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.974573    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.975169    7460 round_trippers.go:580]     Audit-Id: 399d1462-2ec5-414e-be67-78f6c4f915a3
	I0421 20:29:59.975206    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1868","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0421 20:29:59.976293    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:29:59.976293    7460 round_trippers.go:469] Request Headers:
	I0421 20:29:59.976345    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:29:59.976345    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:29:59.979255    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:29:59.979364    7460 round_trippers.go:577] Response Headers:
	I0421 20:29:59.979364    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:29:59.979364    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:29:59 GMT
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Audit-Id: 9a2a1722-a474-4b7f-a07d-b6eaf1292ea4
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:29:59.979364    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:29:59.979610    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:00.486049    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:30:00.486122    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.486122    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.486186    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.496575    7460 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 20:30:00.497315    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.497315    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.497387    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.497387    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.497387    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.497387    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.497387    7460 round_trippers.go:580]     Audit-Id: 884c4f84-eff9-4a83-adae-794d67ac84db
	I0421 20:30:00.497935    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1946","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0421 20:30:00.498820    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:00.498881    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.498881    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.498881    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.501210    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:30:00.501210    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.501210    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.501210    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.501210    7460 round_trippers.go:580]     Audit-Id: 3cc5b87a-364d-4980-8787-dc0fd20a4c39
	I0421 20:30:00.502189    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:00.502189    7460 pod_ready.go:92] pod "kube-controller-manager-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:30:00.502189    7460 pod_ready.go:81] duration metric: took 530.508ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.502189    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.502189    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:30:00.502189    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.503019    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.503019    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.507417    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:00.507417    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.507417    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.507417    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.507417    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.508411    7460 round_trippers.go:580]     Audit-Id: 312e496c-a156-4867-b951-d30bf7195762
	I0421 20:30:00.508411    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.508411    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.508411    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9zlm5","generateName":"kube-proxy-","namespace":"kube-system","uid":"61ba111b-28e9-40db-943d-22a595fdc27e","resourceVersion":"1803","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0421 20:30:00.509309    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:30:00.509336    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.509336    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.509336    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.511786    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:30:00.511786    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.511786    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.511786    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Audit-Id: 4d775e2b-9978-40ab-a90d-c2c6d12d839d
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.511786    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.511786    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799","resourceVersion":"1934","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_09_11_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0421 20:30:00.511786    7460 pod_ready.go:97] node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
	I0421 20:30:00.513117    7460 pod_ready.go:81] duration metric: took 10.9283ms for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	E0421 20:30:00.513117    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m02" hosting pod "kube-proxy-9zlm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m02" has status "Ready":"Unknown"
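(Here the wait for kube-proxy-9zlm5 is skipped rather than timed out: before waiting, the loop looks up the node hosting the pod, multinode-152500-m02, sees its Ready condition reported as "Unknown", records the WaitExtra error above, and moves on. A hedged sketch of that node-side check with client-go follows; the function and parameter names are placeholders, not minikube's code.)

// Illustrative sketch: decide whether to skip waiting on a pod because its node is not Ready.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node has a Ready condition of True.
// In practice nodeName would come from the pod's spec.nodeName; here it is a placeholder.
func nodeIsReady(ctx context.Context, client kubernetes.Interface, nodeName string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		// A status of "Unknown", as logged above, falls through and yields false,
		// so the caller skips waiting for pods scheduled on that node.
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}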
	I0421 20:30:00.513117    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.533295    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:30:00.533295    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.533295    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.533295    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.536702    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:30:00.536702    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.536702    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.536702    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Audit-Id: fdf3694e-b427-4029-8cf2-323b3e567205
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.536702    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.536963    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"1893","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0421 20:30:00.721414    7460 request.go:629] Waited for 183.5314ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:00.721414    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:00.721573    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.721573    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.721573    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.724956    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:30:00.725426    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.725426    7460 round_trippers.go:580]     Audit-Id: f07d4f1e-946c-45a5-bc05-e58a49ac5ef0
	I0421 20:30:00.725426    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.725500    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.725500    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.725534    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.725534    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.725534    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:00.726547    7460 pod_ready.go:92] pod "kube-proxy-kl8t2" in "kube-system" namespace has status "Ready":"True"
	I0421 20:30:00.726600    7460 pod_ready.go:81] duration metric: took 213.334ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.726633    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:00.925717    7460 request.go:629] Waited for 198.6835ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:30:00.925811    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:30:00.925811    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:00.925811    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:00.925811    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:00.931131    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:30:00.931297    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:00 GMT
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Audit-Id: 624aff71-0f8c-4452-a719-a06936c18a5d
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:00.931297    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:00.931297    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:00.931297    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:00.931297    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sp699","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab29a5-b24b-4d2c-a829-fbf2770ef34c","resourceVersion":"1781","creationTimestamp":"2024-04-21T20:13:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:13:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0421 20:30:01.128974    7460 request.go:629] Waited for 196.4329ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:30:01.129162    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:30:01.129162    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.129162    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.129162    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.133935    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:01.133935    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Audit-Id: 85e40b6a-afcd-4b05-8a99-fd29caee1690
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.133935    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.133935    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.133935    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.134481    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m03","uid":"9c2fb882-be16-4c12-815f-4dd3e35c66ee","resourceVersion":"1928","creationTimestamp":"2024-04-21T20:25:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_25_05_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:25:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4302 chars]
	I0421 20:30:01.134773    7460 pod_ready.go:97] node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:30:01.134773    7460 pod_ready.go:81] duration metric: took 408.1367ms for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	E0421 20:30:01.134773    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:30:01.134773    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:01.334620    7460 request.go:629] Waited for 199.472ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:30:01.334780    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:30:01.334780    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.334780    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.334780    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.339112    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:01.339182    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Audit-Id: 5691c04e-7bac-4f2b-b0e5-73d123053b7b
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.339182    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.339182    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.339182    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.339335    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"1907","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0421 20:30:01.534635    7460 request.go:629] Waited for 194.4132ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:01.534635    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:30:01.534635    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.534635    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.534635    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.540045    7460 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0421 20:30:01.540045    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.540045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.540045    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.540045    7460 round_trippers.go:580]     Audit-Id: b20bd934-64fe-4d54-929b-2abcc6fda74a
	I0421 20:30:01.540045    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:30:01.540822    7460 pod_ready.go:92] pod "kube-scheduler-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:30:01.540822    7460 pod_ready.go:81] duration metric: took 406.0457ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:30:01.540923    7460 pod_ready.go:38] duration metric: took 5.6477431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
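The pod_ready.go loop above polls each system-critical pod through the API server (hence the client-side throttling waits) and, as with kube-proxy-sp699, a pod whose hosting node is not Ready is skipped rather than treated as a failure. A minimal client-go sketch of the per-pod check being logged, assuming a *kubernetes.Clientset built elsewhere from the profile's kubeconfig (the package name and helper are placeholders, not minikube's actual code):

package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the named kube-system pod currently has the
// condition Ready=True, mirroring the check logged by pod_ready.go above.
// cs is assumed to be constructed from the profile's kubeconfig.
func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("pod %q has no Ready condition yet", name)
}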
	I0421 20:30:01.540923    7460 api_server.go:52] waiting for apiserver process to appear ...
	I0421 20:30:01.556255    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:30:01.587708    7460 command_runner.go:130] > 1865
	I0421 20:30:01.588273    7460 api_server.go:72] duration metric: took 10.1472738s to wait for apiserver process to appear ...
	I0421 20:30:01.588273    7460 api_server.go:88] waiting for apiserver healthz status ...
	I0421 20:30:01.588397    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:30:01.598545    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 200:
	ok
	I0421 20:30:01.598545    7460 round_trippers.go:463] GET https://172.27.197.221:8443/version
	I0421 20:30:01.598545    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.598545    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.598545    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.600521    7460 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0421 20:30:01.601512    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.601512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Content-Length: 263
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Audit-Id: 3a161c80-1cdc-4f35-a9e7-afe66581b79b
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.601512    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.601512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.602531    7460 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0421 20:30:01.602531    7460 api_server.go:141] control plane version: v1.30.0
	I0421 20:30:01.602531    7460 api_server.go:131] duration metric: took 14.2587ms to wait for apiserver health ...
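The healthz wait above is a plain GET against https://172.27.197.221:8443/healthz followed by /version; a 200 response whose body is "ok" is treated as healthy. A sketch of that probe, assuming an *http.Client already configured to trust the cluster CA (TLS setup omitted as a placeholder):

package readiness

import (
	"io"
	"net/http"
)

// apiserverHealthy performs the probe logged above: GET <base>/healthz and
// treat a 200 response whose body is "ok" as healthy. client is assumed to
// already carry the cluster CA in its TLS config.
func apiserverHealthy(client *http.Client, base string) (bool, error) {
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}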
	I0421 20:30:01.602531    7460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0421 20:30:01.721488    7460 request.go:629] Waited for 118.8332ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:01.721703    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:01.721703    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.721703    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.721703    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.732512    7460 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0421 20:30:01.732512    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.732512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.732512    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Audit-Id: 358d8317-9060-4f4b-aa51-e99f3ff5e13c
	I0421 20:30:01.732512    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.733803    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0421 20:30:01.738400    7460 system_pods.go:59] 12 kube-system pods found
	I0421 20:30:01.738454    7460 system_pods.go:61] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "etcd-multinode-152500" [437e0c4d-b43f-48c8-9fee-93e3e8a81c6d] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kindnet-kvd8z" [e6d4f203-892a-4a67-a6aa-38161a3749da] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kindnet-rkgsx" [ba1febf0-40e8-4a24-83e0-cbb9f6c01e34] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kube-apiserver-multinode-152500" [6e73294a-2a7d-4f05-beb1-bb011d5f1f52] Running
	I0421 20:30:01.738486    7460 system_pods.go:61] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:30:01.738536    7460 system_pods.go:61] "kube-proxy-9zlm5" [61ba111b-28e9-40db-943d-22a595fdc27e] Running
	I0421 20:30:01.738536    7460 system_pods.go:61] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:30:01.738536    7460 system_pods.go:61] "kube-proxy-sp699" [8eab29a5-b24b-4d2c-a829-fbf2770ef34c] Running
	I0421 20:30:01.738568    7460 system_pods.go:61] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:30:01.738568    7460 system_pods.go:61] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:30:01.738568    7460 system_pods.go:74] duration metric: took 136.0352ms to wait for pod list to return data ...
	I0421 20:30:01.738568    7460 default_sa.go:34] waiting for default service account to be created ...
	I0421 20:30:01.923876    7460 request.go:629] Waited for 184.8928ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/default/serviceaccounts
	I0421 20:30:01.924040    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/default/serviceaccounts
	I0421 20:30:01.924040    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:01.924040    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:01.924040    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:01.928539    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:01.928539    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:01.928539    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:01.928539    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Content-Length: 262
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:01 GMT
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Audit-Id: 0447afe9-70ce-4eeb-8c9d-d0f1807ef1cd
	I0421 20:30:01.928539    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:01.928539    7460 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a4620806-bbb0-42e7-af50-a593b05fe653","resourceVersion":"352","creationTimestamp":"2024-04-21T20:06:07Z"}}]}
	I0421 20:30:01.928539    7460 default_sa.go:45] found service account: "default"
	I0421 20:30:01.929102    7460 default_sa.go:55] duration metric: took 189.97ms for default service account to be created ...
	I0421 20:30:01.929102    7460 system_pods.go:116] waiting for k8s-apps to be running ...
	I0421 20:30:02.127021    7460 request.go:629] Waited for 197.5426ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:02.127534    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:30:02.127534    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:02.127534    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:02.127534    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:02.134435    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:30:02.135102    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:02.135102    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:02.135102    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:02 GMT
	I0421 20:30:02.135102    7460 round_trippers.go:580]     Audit-Id: 5f368fce-b0f0-4fa1-82af-643d547059f0
	I0421 20:30:02.137051    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0421 20:30:02.141500    7460 system_pods.go:86] 12 kube-system pods found
	I0421 20:30:02.141500    7460 system_pods.go:89] "coredns-7db6d8ff4d-v7pf8" [2973ebed-006d-4495-b1a7-7b4472e46f23] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "etcd-multinode-152500" [437e0c4d-b43f-48c8-9fee-93e3e8a81c6d] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kindnet-kvd8z" [e6d4f203-892a-4a67-a6aa-38161a3749da] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kindnet-rkgsx" [ba1febf0-40e8-4a24-83e0-cbb9f6c01e34] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kindnet-vb8ws" [3e6ed0fc-724b-4a52-9738-2d9ef84b57eb] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-apiserver-multinode-152500" [6e73294a-2a7d-4f05-beb1-bb011d5f1f52] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-controller-manager-multinode-152500" [a3e58103-1fb6-4e2f-aa47-76f3a8cfd758] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-proxy-9zlm5" [61ba111b-28e9-40db-943d-22a595fdc27e] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-proxy-kl8t2" [4154de9b-3a7c-4ed6-a987-82b6d539dc7e] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-proxy-sp699" [8eab29a5-b24b-4d2c-a829-fbf2770ef34c] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "kube-scheduler-multinode-152500" [8178553d-7f1d-423a-89e5-41b226b2bb6d] Running
	I0421 20:30:02.141500    7460 system_pods.go:89] "storage-provisioner" [2eea731d-6a0b-4404-8518-a088d879b487] Running
	I0421 20:30:02.141500    7460 system_pods.go:126] duration metric: took 212.3968ms to wait for k8s-apps to be running ...
	I0421 20:30:02.141500    7460 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:30:02.153438    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:30:02.181415    7460 system_svc.go:56] duration metric: took 39.9139ms WaitForService to wait for kubelet
	I0421 20:30:02.181492    7460 kubeadm.go:576] duration metric: took 10.7404884s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:30:02.181556    7460 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:30:02.328512    7460 request.go:629] Waited for 146.7643ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes
	I0421 20:30:02.328512    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes
	I0421 20:30:02.328512    7460 round_trippers.go:469] Request Headers:
	I0421 20:30:02.328512    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:30:02.328512    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:30:02.333200    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:30:02.333200    7460 round_trippers.go:577] Response Headers:
	I0421 20:30:02.333200    7460 round_trippers.go:580]     Audit-Id: f25be68c-7c94-43f1-bcbc-fd0154528834
	I0421 20:30:02.333681    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:30:02.333681    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:30:02.333681    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:30:02.333681    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:30:02.333681    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:30:02 GMT
	I0421 20:30:02.334251    7460 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1947"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16162 chars]
	I0421 20:30:02.335208    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:30:02.335284    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:30:02.335284    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:30:02.335284    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:30:02.335284    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:30:02.335284    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:30:02.335284    7460 node_conditions.go:105] duration metric: took 153.6626ms to run NodePressure ...
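The NodePressure step lists all nodes and reads the capacity each one reports (2 CPUs and 17734596Ki of ephemeral storage per node here). A sketch of the same capacity readout with client-go, under the same clientset assumption as above:

package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and prints the CPU and ephemeral-storage
// capacity from its status, which is what the node_conditions lines above
// summarize per node.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}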
	I0421 20:30:02.335376    7460 start.go:240] waiting for startup goroutines ...
	I0421 20:30:02.335376    7460 start.go:245] waiting for cluster config update ...
	I0421 20:30:02.335376    7460 start.go:254] writing updated cluster config ...
	I0421 20:30:02.339633    7460 out.go:177] 
	I0421 20:30:02.342737    7460 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:30:02.353499    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:30:02.353499    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:30:02.359896    7460 out.go:177] * Starting "multinode-152500-m02" worker node in "multinode-152500" cluster
	I0421 20:30:02.364793    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:30:02.364793    7460 cache.go:56] Caching tarball of preloaded images
	I0421 20:30:02.364793    7460 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:30:02.364793    7460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:30:02.365853    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:30:02.368123    7460 start.go:360] acquireMachinesLock for multinode-152500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:30:02.368123    7460 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-152500-m02"
	I0421 20:30:02.368567    7460 start.go:96] Skipping create...Using existing machine configuration
	I0421 20:30:02.368638    7460 fix.go:54] fixHost starting: m02
	I0421 20:30:02.368829    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:04.516359    7460 main.go:141] libmachine: [stdout =====>] : Off
	
	I0421 20:30:04.516359    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:04.517237    7460 fix.go:112] recreateIfNeeded on multinode-152500-m02: state=Stopped err=<nil>
	W0421 20:30:04.517237    7460 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 20:30:04.523574    7460 out.go:177] * Restarting existing hyperv VM for "multinode-152500-m02" ...
	I0421 20:30:04.527195    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-152500-m02
	I0421 20:30:07.645278    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:07.645473    7460 main.go:141] libmachine: [stderr =====>] : 
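fixHost found multinode-152500-m02 in state Off, so the VM is started again through PowerShell. A sketch of that check-then-start step via os/exec, using the same Hyper-V cmdlets shown in the log (simplified error handling; not minikube's actual implementation):

package hyperv

import (
	"context"
	"os/exec"
	"strings"
)

// ensureRunning queries the VM state with the Get-VM expression seen in the
// log and starts the VM with Start-VM if it is currently Off.
func ensureRunning(ctx context.Context, vm string) error {
	out, err := exec.CommandContext(ctx, "powershell.exe", "-NoProfile", "-NonInteractive",
		"( Hyper-V\\Get-VM "+vm+" ).state").Output()
	if err != nil {
		return err
	}
	if strings.TrimSpace(string(out)) != "Off" {
		return nil // already running or starting
	}
	return exec.CommandContext(ctx, "powershell.exe", "-NoProfile", "-NonInteractive",
		"Hyper-V\\Start-VM "+vm).Run()
}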
	I0421 20:30:07.645473    7460 main.go:141] libmachine: Waiting for host to start...
	I0421 20:30:07.645473    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:09.902187    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:09.902187    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:09.902187    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:12.509352    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:12.509352    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:13.525271    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:15.726921    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:15.727737    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:15.727844    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:18.347235    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:18.347423    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:19.361393    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:21.560375    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:21.561168    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:21.561411    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:24.150225    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:24.150545    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:25.159489    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:27.363744    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:27.364042    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:27.364042    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:29.991670    7460 main.go:141] libmachine: [stdout =====>] : 
	I0421 20:30:29.991816    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:31.005921    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:33.211158    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:33.211674    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:33.211674    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:35.870396    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:35.870396    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:35.872597    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:38.062427    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:38.062427    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:38.062427    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:40.677534    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:40.678188    7460 main.go:141] libmachine: [stderr =====>] : 
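The repeated Get-VM / ipaddresses queries above form a retry loop: the adapter reports no address until the guest finishes booting and DHCP completes, after which 172.27.194.200 appears. A sketch of a single such query; a caller would keep retrying while it returns an empty string:

package hyperv

import (
	"context"
	"os/exec"
	"strings"
)

// vmIP reads the first IP address of the VM's first network adapter using the
// PowerShell expression from the log. An empty result means no address has
// been assigned yet, so the host is not considered up.
func vmIP(ctx context.Context, vm string) (string, error) {
	expr := "(( Hyper-V\\Get-VM " + vm + " ).networkadapters[0]).ipaddresses[0]"
	out, err := exec.CommandContext(ctx, "powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}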
	I0421 20:30:40.678351    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:30:40.681219    7460 machine.go:94] provisionDockerMachine start ...
	I0421 20:30:40.681219    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:42.896316    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:42.896316    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:42.896400    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:45.550779    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:45.550779    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:45.558754    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:30:45.559466    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:30:45.559514    7460 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 20:30:45.697123    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 20:30:45.697200    7460 buildroot.go:166] provisioning hostname "multinode-152500-m02"
	I0421 20:30:45.697257    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:47.903066    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:47.903658    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:47.903748    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:50.547784    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:50.547983    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:50.554605    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:30:50.555185    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:30:50.555185    7460 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-152500-m02 && echo "multinode-152500-m02" | sudo tee /etc/hostname
	I0421 20:30:50.732960    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-152500-m02
	
	I0421 20:30:50.733089    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:52.901289    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:52.901289    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:52.901989    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:30:55.564355    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:30:55.564355    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:55.569819    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:30:55.570445    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:30:55.570445    7460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-152500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-152500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-152500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 20:30:55.733655    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 20:30:55.733655    7460 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 20:30:55.733655    7460 buildroot.go:174] setting up certificates
	I0421 20:30:55.733655    7460 provision.go:84] configureAuth start
	I0421 20:30:55.735113    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:30:57.907152    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:30:57.907691    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:30:57.907691    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:00.530844    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:00.530844    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:00.530844    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:02.677410    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:02.677410    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:02.678014    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:05.321657    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:05.322561    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:05.322561    7460 provision.go:143] copyHostCerts
	I0421 20:31:05.322779    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0421 20:31:05.323109    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 20:31:05.323203    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 20:31:05.323797    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 20:31:05.324971    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0421 20:31:05.325350    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 20:31:05.325350    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 20:31:05.325799    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 20:31:05.326885    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0421 20:31:05.327196    7460 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 20:31:05.327196    7460 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 20:31:05.327196    7460 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 20:31:05.328591    7460 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-152500-m02 san=[127.0.0.1 172.27.194.200 localhost minikube multinode-152500-m02]
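The server certificate is issued against the profile CA with the SANs listed above (loopback, the VM IP 172.27.194.200, and the host names). A crypto/x509 sketch of such an issuance; the key size, validity period, and usages here are illustrative assumptions, not the values minikube actually uses:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by the given CA for the
// given IP and DNS SANs, roughly the step logged above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-152500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}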
	I0421 20:31:05.495601    7460 provision.go:177] copyRemoteCerts
	I0421 20:31:05.509273    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 20:31:05.509350    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:07.657926    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:07.657926    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:07.658882    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:10.305774    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:10.305774    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:10.307229    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:10.415707    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9062924s)
	I0421 20:31:10.415707    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0421 20:31:10.415973    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 20:31:10.473145    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0421 20:31:10.474543    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0421 20:31:10.527399    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0421 20:31:10.527399    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 20:31:10.577892    7460 provision.go:87] duration metric: took 14.8441292s to configureAuth
	I0421 20:31:10.577892    7460 buildroot.go:189] setting minikube options for container-runtime
	I0421 20:31:10.578636    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:31:10.578636    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:12.730974    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:12.730974    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:12.730974    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:15.336331    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:15.336331    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:15.343450    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:15.344202    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:15.344202    7460 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 20:31:15.487072    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 20:31:15.487072    7460 buildroot.go:70] root file system type: tmpfs
	I0421 20:31:15.487072    7460 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 20:31:15.487072    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:17.637846    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:17.637846    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:17.637846    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:20.299560    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:20.299560    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:20.307370    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:20.307370    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:20.307370    7460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.197.221"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 20:31:20.482126    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.197.221
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
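
The unit text above relies on systemd's ExecStart override semantics: the empty ExecStart= line clears any start command inherited from another unit file or drop-in, and the ExecStart= line that follows becomes the only one, which is exactly what the comment about Type=oneshot services is warning about. A minimal sketch of the same pattern as a local drop-in (the path and dockerd flags here are illustrative, not taken from this run):

  # illustrative override; the real flags for this run are in the unit above
  sudo mkdir -p /etc/systemd/system/docker.service.d
  printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
    | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
  sudo systemctl daemon-reload && sudo systemctl restart docker
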
	I0421 20:31:20.482323    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:22.593553    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:22.593553    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:22.594543    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:25.204431    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:25.205252    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:25.212923    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:25.212923    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:25.212923    7460 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 20:31:27.701078    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0421 20:31:27.701265    7460 machine.go:97] duration metric: took 47.0196483s to provisionDockerMachine
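
The diff-or-install step a few lines above is what closes out provisionDockerMachine: diff -u exits non-zero when the two unit files differ, or, as here, when /lib/systemd/system/docker.service does not exist yet, so the || branch moves the freshly written docker.service.new into place, reloads systemd, enables the unit (hence the "Created symlink" line) and restarts Docker. The idiom in isolation, with placeholder paths:

  new=/tmp/docker.service.new; cur=/lib/systemd/system/docker.service   # placeholder paths
  sudo diff -u "$cur" "$new" || {
    sudo mv "$new" "$cur"
    sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
  }
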
	I0421 20:31:27.701265    7460 start.go:293] postStartSetup for "multinode-152500-m02" (driver="hyperv")
	I0421 20:31:27.701265    7460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 20:31:27.716770    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 20:31:27.716770    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:29.895834    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:29.895834    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:29.896927    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:32.588837    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:32.588837    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:32.589691    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:32.706299    7460 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9889303s)
	I0421 20:31:32.720200    7460 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 20:31:32.727323    7460 command_runner.go:130] > NAME=Buildroot
	I0421 20:31:32.727323    7460 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0421 20:31:32.727323    7460 command_runner.go:130] > ID=buildroot
	I0421 20:31:32.727323    7460 command_runner.go:130] > VERSION_ID=2023.02.9
	I0421 20:31:32.727323    7460 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0421 20:31:32.727419    7460 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 20:31:32.727419    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 20:31:32.727419    7460 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 20:31:32.728688    7460 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 20:31:32.728762    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /etc/ssl/certs/138002.pem
	I0421 20:31:32.742013    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 20:31:32.764498    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 20:31:32.815903    7460 start.go:296] duration metric: took 5.1146009s for postStartSetup
	I0421 20:31:32.815903    7460 fix.go:56] duration metric: took 1m30.4466047s for fixHost
	I0421 20:31:32.815903    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:35.009500    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:35.009500    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:35.010305    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:37.657803    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:37.657803    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:37.667216    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:37.667815    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:37.668073    7460 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 20:31:37.800632    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713731497.800005993
	
	I0421 20:31:37.800632    7460 fix.go:216] guest clock: 1713731497.800005993
	I0421 20:31:37.800632    7460 fix.go:229] Guest: 2024-04-21 20:31:37.800005993 +0000 UTC Remote: 2024-04-21 20:31:32.8159035 +0000 UTC m=+242.161584501 (delta=4.984102493s)
	I0421 20:31:37.800632    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:39.994754    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:39.994754    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:39.994754    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:42.682915    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:42.683122    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:42.688666    7460 main.go:141] libmachine: Using SSH client type: native
	I0421 20:31:42.688870    7460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.194.200 22 <nil> <nil>}
	I0421 20:31:42.688870    7460 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713731497
	I0421 20:31:42.843557    7460 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 20:31:37 UTC 2024
	
	I0421 20:31:42.843658    7460 fix.go:236] clock set: Sun Apr 21 20:31:37 UTC 2024
	 (err=<nil>)
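
fix.go compares the guest's clock, read over SSH with date +%s.%N, against the host timestamp recorded for the same moment; here the delta was about 4.98s, so the guest clock was pinned with date -s @<epoch>. A POSIX-shell sketch of the same check, assuming key-based SSH access to the node and a 3-second threshold chosen purely for illustration:

  guest=$(ssh docker@172.27.194.200 'date +%s.%N')          # guest epoch time; address from this run
  host=$(date +%s.%N)
  delta=$(awk -v g="$guest" -v h="$host" 'BEGIN{d=g-h; if (d<0) d=-d; print d}')
  # the 3-second threshold is an assumption for this sketch only
  awk -v d="$delta" 'BEGIN{exit !(d>3)}' && ssh docker@172.27.194.200 "sudo date -s @${host%.*}"
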
	I0421 20:31:42.843658    7460 start.go:83] releasing machines lock for "multinode-152500-m02", held for 1m40.4748006s
	I0421 20:31:42.843845    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:45.037154    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:45.037154    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:45.037946    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:47.685142    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:47.685142    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:47.688485    7460 out.go:177] * Found network options:
	I0421 20:31:47.693437    7460 out.go:177]   - NO_PROXY=172.27.197.221
	W0421 20:31:47.695723    7460 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 20:31:47.699239    7460 out.go:177]   - NO_PROXY=172.27.197.221
	W0421 20:31:47.701284    7460 proxy.go:119] fail to check proxy env: Error ip not in block
	W0421 20:31:47.703274    7460 proxy.go:119] fail to check proxy env: Error ip not in block
	I0421 20:31:47.706338    7460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 20:31:47.706338    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:47.718339    7460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0421 20:31:47.718339    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:31:49.935500    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:49.936473    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:49.936942    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:49.938861    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:31:49.938861    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:49.939040    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:31:52.648166    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:52.648166    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:52.648166    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:52.682670    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:31:52.682670    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:31:52.683163    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:31:52.807281    7460 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0421 20:31:52.808231    7460 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0421 20:31:52.808231    7460 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1018551s)
	I0421 20:31:52.808231    7460 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0898545s)
	W0421 20:31:52.808385    7460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 20:31:52.823383    7460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0421 20:31:52.858965    7460 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0421 20:31:52.859038    7460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
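
Existing bridge and podman CNI configs are not deleted, only renamed with a .mk_disabled suffix, so the CNI that minikube itself configures for this multi-node profile (kindnet appears later in this log) is the one the runtime picks up, and the originals can be restored afterwards. The same find invocation, with quoting adjusted for an interactive shell:

  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
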
	I0421 20:31:52.859038    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:31:52.859300    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:31:52.903577    7460 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0421 20:31:52.919617    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 20:31:52.959526    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 20:31:52.983458    7460 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 20:31:52.997462    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 20:31:53.033520    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:31:53.070130    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 20:31:53.109252    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 20:31:53.147383    7460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 20:31:53.185939    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 20:31:53.221064    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 20:31:53.256700    7460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
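
The run of sed edits above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image to registry.k8s.io/pause:3.9, sets SystemdCgroup = false to match the cgroupfs driver chosen for this run, moves any io.containerd.runtime.v1.linux or runc.v1 entries to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-adds enable_unprivileged_ports = true. A compact sketch of the cgroup-driver edit plus a quick verification:

  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
  grep -n 'SystemdCgroup' /etc/containerd/config.toml      # confirm the value actually changed
  sudo systemctl daemon-reload && sudo systemctl restart containerd
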
	I0421 20:31:53.294069    7460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 20:31:53.315538    7460 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0421 20:31:53.331140    7460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 20:31:53.369621    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:53.622697    7460 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 20:31:53.661857    7460 start.go:494] detecting cgroup driver to use...
	I0421 20:31:53.678922    7460 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 20:31:53.707747    7460 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0421 20:31:53.707747    7460 command_runner.go:130] > [Unit]
	I0421 20:31:53.707747    7460 command_runner.go:130] > Description=Docker Application Container Engine
	I0421 20:31:53.707747    7460 command_runner.go:130] > Documentation=https://docs.docker.com
	I0421 20:31:53.707747    7460 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0421 20:31:53.707747    7460 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0421 20:31:53.707747    7460 command_runner.go:130] > StartLimitBurst=3
	I0421 20:31:53.707747    7460 command_runner.go:130] > StartLimitIntervalSec=60
	I0421 20:31:53.707747    7460 command_runner.go:130] > [Service]
	I0421 20:31:53.707747    7460 command_runner.go:130] > Type=notify
	I0421 20:31:53.707747    7460 command_runner.go:130] > Restart=on-failure
	I0421 20:31:53.707747    7460 command_runner.go:130] > Environment=NO_PROXY=172.27.197.221
	I0421 20:31:53.707747    7460 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0421 20:31:53.707747    7460 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0421 20:31:53.707747    7460 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0421 20:31:53.707747    7460 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0421 20:31:53.707747    7460 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0421 20:31:53.707747    7460 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0421 20:31:53.707747    7460 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0421 20:31:53.707747    7460 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0421 20:31:53.707747    7460 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0421 20:31:53.707747    7460 command_runner.go:130] > ExecStart=
	I0421 20:31:53.707747    7460 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0421 20:31:53.708286    7460 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0421 20:31:53.708286    7460 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0421 20:31:53.708286    7460 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0421 20:31:53.708286    7460 command_runner.go:130] > LimitNOFILE=infinity
	I0421 20:31:53.708286    7460 command_runner.go:130] > LimitNPROC=infinity
	I0421 20:31:53.708286    7460 command_runner.go:130] > LimitCORE=infinity
	I0421 20:31:53.708286    7460 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0421 20:31:53.708384    7460 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0421 20:31:53.708384    7460 command_runner.go:130] > TasksMax=infinity
	I0421 20:31:53.708384    7460 command_runner.go:130] > TimeoutStartSec=0
	I0421 20:31:53.708429    7460 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0421 20:31:53.708429    7460 command_runner.go:130] > Delegate=yes
	I0421 20:31:53.708429    7460 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0421 20:31:53.708429    7460 command_runner.go:130] > KillMode=process
	I0421 20:31:53.708429    7460 command_runner.go:130] > [Install]
	I0421 20:31:53.708429    7460 command_runner.go:130] > WantedBy=multi-user.target
	I0421 20:31:53.722677    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:31:53.769069    7460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 20:31:53.823796    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 20:31:53.869309    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:31:53.910260    7460 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0421 20:31:53.983192    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 20:31:54.011767    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 20:31:54.056488    7460 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0421 20:31:54.070351    7460 ssh_runner.go:195] Run: which cri-dockerd
	I0421 20:31:54.078654    7460 command_runner.go:130] > /usr/bin/cri-dockerd
	I0421 20:31:54.092303    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 20:31:54.113929    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 20:31:54.167565    7460 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 20:31:54.405127    7460 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 20:31:54.618102    7460 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 20:31:54.618102    7460 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
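
docker.go:574 then writes a small /etc/docker/daemon.json (130 bytes here) so that dockerd itself also uses the cgroupfs cgroup driver. The file's contents are not printed in this log; the sketch below is an assumption for illustration only, showing the kind of settings such a file typically carries:

  # assumed contents, not taken from this run
  printf '%s\n' '{' '  "exec-opts": ["native.cgroupdriver=cgroupfs"],' \
    '  "log-driver": "json-file",' '  "log-opts": { "max-size": "100m" },' \
    '  "storage-driver": "overlay2"' '}' | sudo tee /etc/docker/daemon.json >/dev/null
  sudo systemctl restart docker
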
	I0421 20:31:54.672592    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:54.897857    7460 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 20:31:57.623997    7460 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7261196s)
	I0421 20:31:57.639941    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0421 20:31:57.684566    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:31:57.724044    7460 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0421 20:31:57.952205    7460 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0421 20:31:58.180378    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:58.411080    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0421 20:31:58.458273    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0421 20:31:58.502698    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:31:58.725930    7460 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0421 20:31:58.858306    7460 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0421 20:31:58.873723    7460 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0421 20:31:58.883479    7460 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0421 20:31:58.883479    7460 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0421 20:31:58.883479    7460 command_runner.go:130] > Device: 0,22	Inode: 852         Links: 1
	I0421 20:31:58.883479    7460 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0421 20:31:58.883479    7460 command_runner.go:130] > Access: 2024-04-21 20:31:58.774879187 +0000
	I0421 20:31:58.883737    7460 command_runner.go:130] > Modify: 2024-04-21 20:31:58.774879187 +0000
	I0421 20:31:58.883737    7460 command_runner.go:130] > Change: 2024-04-21 20:31:58.779879430 +0000
	I0421 20:31:58.883737    7460 command_runner.go:130] >  Birth: -
	I0421 20:31:58.883919    7460 start.go:562] Will wait 60s for crictl version
	I0421 20:31:58.898193    7460 ssh_runner.go:195] Run: which crictl
	I0421 20:31:58.905744    7460 command_runner.go:130] > /usr/bin/crictl
	I0421 20:31:58.919379    7460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0421 20:31:58.987734    7460 command_runner.go:130] > Version:  0.1.0
	I0421 20:31:58.987797    7460 command_runner.go:130] > RuntimeName:  docker
	I0421 20:31:58.987797    7460 command_runner.go:130] > RuntimeVersion:  26.0.1
	I0421 20:31:58.987797    7460 command_runner.go:130] > RuntimeApiVersion:  v1
	I0421 20:31:58.987866    7460 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
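
With /etc/crictl.yaml now pointing at the cri-dockerd socket, crictl reaches Docker through the CRI shim, which is what the version block above shows (RuntimeName docker, RuntimeApiVersion v1). The same check by hand:

  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
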
	I0421 20:31:58.998748    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:31:59.037006    7460 command_runner.go:130] > 26.0.1
	I0421 20:31:59.050016    7460 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0421 20:31:59.092743    7460 command_runner.go:130] > 26.0.1
	I0421 20:31:59.098811    7460 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.1 ...
	I0421 20:31:59.101257    7460 out.go:177]   - env NO_PROXY=172.27.197.221
	I0421 20:31:59.103793    7460 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0421 20:31:59.108758    7460 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:89:59 Flags:up|broadcast|multicast|running}
	I0421 20:31:59.111444    7460 ip.go:210] interface addr: fe80::b6bf:ec1:9a9a:2297/64
	I0421 20:31:59.111444    7460 ip.go:210] interface addr: 172.27.192.1/20
	I0421 20:31:59.127165    7460 ssh_runner.go:195] Run: grep 172.27.192.1	host.minikube.internal$ /etc/hosts
	I0421 20:31:59.135640    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
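
That one-liner is the idempotent /etc/hosts update used throughout these logs: filter out any existing host.minikube.internal entry, append the fresh mapping, and copy the result back, so repeated provisioning never stacks up duplicates. The same pattern written out:

  ip=172.27.192.1; name=host.minikube.internal             # values taken from this run
  { grep -v "${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.new
  sudo cp /tmp/hosts.new /etc/hosts
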
	I0421 20:31:59.161393    7460 mustload.go:65] Loading cluster: multinode-152500
	I0421 20:31:59.162861    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:31:59.163365    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:01.356507    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:01.356507    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:01.357367    7460 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:32:01.357815    7460 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500 for IP: 172.27.194.200
	I0421 20:32:01.357815    7460 certs.go:194] generating shared ca certs ...
	I0421 20:32:01.357815    7460 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 20:32:01.358793    7460 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0421 20:32:01.359125    7460 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0421 20:32:01.359272    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0421 20:32:01.359557    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0421 20:32:01.359733    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0421 20:32:01.360031    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0421 20:32:01.360667    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem (1338 bytes)
	W0421 20:32:01.361071    7460 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800_empty.pem, impossibly tiny 0 bytes
	I0421 20:32:01.361071    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0421 20:32:01.361071    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0421 20:32:01.361754    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0421 20:32:01.362127    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0421 20:32:01.362687    7460 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem (1708 bytes)
	I0421 20:32:01.363056    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:01.363260    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem -> /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.363260    7460 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.363260    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0421 20:32:01.416246    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0421 20:32:01.475087    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0421 20:32:01.530138    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0421 20:32:01.588986    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0421 20:32:01.641046    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\13800.pem --> /usr/share/ca-certificates/13800.pem (1338 bytes)
	I0421 20:32:01.695951    7460 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /usr/share/ca-certificates/138002.pem (1708 bytes)
	I0421 20:32:01.764305    7460 ssh_runner.go:195] Run: openssl version
	I0421 20:32:01.773655    7460 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0421 20:32:01.788570    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13800.pem && ln -fs /usr/share/ca-certificates/13800.pem /etc/ssl/certs/13800.pem"
	I0421 20:32:01.821651    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.830863    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.830929    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 21 18:41 /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.844441    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13800.pem
	I0421 20:32:01.855992    7460 command_runner.go:130] > 51391683
	I0421 20:32:01.873022    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13800.pem /etc/ssl/certs/51391683.0"
	I0421 20:32:01.909882    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/138002.pem && ln -fs /usr/share/ca-certificates/138002.pem /etc/ssl/certs/138002.pem"
	I0421 20:32:01.946597    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.954505    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.954943    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 21 18:41 /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.967755    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/138002.pem
	I0421 20:32:01.977761    7460 command_runner.go:130] > 3ec20f2e
	I0421 20:32:01.988853    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/138002.pem /etc/ssl/certs/3ec20f2e.0"
	I0421 20:32:02.033215    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0421 20:32:02.070354    7460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.078501    7460 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.078596    7460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 21 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.094771    7460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0421 20:32:02.105594    7460 command_runner.go:130] > b5213941
	I0421 20:32:02.120128    7460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
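
Each CA copied under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0), which is how OpenSSL finds trust anchors at verification time; the hashes printed above (51391683, 3ec20f2e, b5213941) are exactly those link names. Reproducing the step for one certificate:

  cert=/usr/share/ca-certificates/minikubeCA.pem           # path used in this run
  h=$(openssl x509 -hash -noout -in "$cert")
  sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"
  ls -l "/etc/ssl/certs/${h}.0"
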
	I0421 20:32:02.158779    7460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0421 20:32:02.165926    7460 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:32:02.166840    7460 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0421 20:32:02.167068    7460 kubeadm.go:928] updating node {m02 172.27.194.200 8443 v1.30.0 docker false true} ...
	I0421 20:32:02.167068    7460 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.194.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-152500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
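
kubeadm.go:940 renders the kubelet override for the new worker: the same clear-then-set ExecStart trick as the docker unit, plus the node-specific flags --hostname-override=multinode-152500-m02 and --node-ip=172.27.194.200 that make the node register under the right name and address; the rendered text is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service a few lines below. A minimal sketch of installing such an override by hand (content abbreviated from the unit above):

  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  printf '%s\n' '[Service]' 'ExecStart=' \
    'ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-152500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.194.200' \
    | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
  sudo systemctl daemon-reload && sudo systemctl restart kubelet
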
	I0421 20:32:02.182075    7460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0421 20:32:02.206399    7460 command_runner.go:130] > kubeadm
	I0421 20:32:02.206509    7460 command_runner.go:130] > kubectl
	I0421 20:32:02.206509    7460 command_runner.go:130] > kubelet
	I0421 20:32:02.206509    7460 binaries.go:44] Found k8s binaries, skipping transfer
	I0421 20:32:02.220583    7460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0421 20:32:02.240887    7460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0421 20:32:02.274739    7460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0421 20:32:02.329059    7460 ssh_runner.go:195] Run: grep 172.27.197.221	control-plane.minikube.internal$ /etc/hosts
	I0421 20:32:02.337172    7460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.197.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0421 20:32:02.380653    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:32:02.604153    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:32:02.641945    7460 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:32:02.643166    7460 start.go:316] joinCluster: &{Name:multinode-152500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-152500
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.197.221 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.193.99 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false
istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 20:32:02.643367    7460 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:32:02.643433    7460 host.go:66] Checking if "multinode-152500-m02" exists ...
	I0421 20:32:02.644127    7460 mustload.go:65] Loading cluster: multinode-152500
	I0421 20:32:02.644656    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:02.645335    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:04.879846    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:04.879846    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:04.879846    7460 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:32:04.881501    7460 api_server.go:166] Checking apiserver status ...
	I0421 20:32:04.894749    7460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:32:04.894749    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:07.106073    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:07.106326    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:07.106628    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:32:09.773080    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:32:09.773080    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:09.773472    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:32:09.897593    7460 command_runner.go:130] > 1865
	I0421 20:32:09.897593    7460 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.0028071s)
	I0421 20:32:09.913085    7460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1865/cgroup
	W0421 20:32:09.934993    7460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1865/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:32:09.949205    7460 ssh_runner.go:195] Run: ls
	I0421 20:32:09.958032    7460 api_server.go:253] Checking apiserver healthz at https://172.27.197.221:8443/healthz ...
	I0421 20:32:09.965828    7460 api_server.go:279] https://172.27.197.221:8443/healthz returned 200:
	ok
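
Before touching the worker, the tool verifies the control plane: it locates the kube-apiserver process on the primary node and probes https://172.27.197.221:8443/healthz, treating a 200 with body "ok" as healthy (the missing freezer cgroup is only a warning). Checked by hand, assuming the default anonymous access to /healthz is still in place:

  curl -k https://172.27.197.221:8443/healthz; echo        # expect: ok
  # and on the control-plane VM itself:
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
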
	I0421 20:32:09.978838    7460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-152500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0421 20:32:10.170284    7460 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-rkgsx, kube-system/kube-proxy-9zlm5
	I0421 20:32:13.203109    7460 command_runner.go:130] > node/multinode-152500-m02 cordoned
	I0421 20:32:13.203173    7460 command_runner.go:130] > pod "busybox-fc5497c4f-82tdr" has DeletionTimestamp older than 1 seconds, skipping
	I0421 20:32:13.203203    7460 command_runner.go:130] > node/multinode-152500-m02 drained
	I0421 20:32:13.203203    7460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-152500-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2243419s)
	I0421 20:32:13.203250    7460 node.go:128] successfully drained node "multinode-152500-m02"
	I0421 20:32:13.203333    7460 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0421 20:32:13.203333    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:32:15.366958    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:15.366958    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:15.366958    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:32:18.032592    7460 main.go:141] libmachine: [stdout =====>] : 172.27.194.200
	
	I0421 20:32:18.033610    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:18.033690    7460 sshutil.go:53] new ssh client: &{IP:172.27.194.200 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:32:18.517473    7460 command_runner.go:130] ! W0421 20:32:18.531283    1539 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0421 20:32:19.169586    7460 command_runner.go:130] ! W0421 20:32:19.182808    1539 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 474a506b27d5734448213d877b9514fbf7367bdb20aad63219c64d7241ce01ad: output: E0421 20:32:18.811371    1576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-82tdr_default\" network: cni config uninitialized" podSandboxID="474a506b27d5734448213d877b9514fbf7367bdb20aad63219c64d7241ce01ad"
	I0421 20:32:19.169586    7460 command_runner.go:130] ! time="2024-04-21T20:32:18Z" level=fatal msg="stopping the pod sandbox \"474a506b27d5734448213d877b9514fbf7367bdb20aad63219c64d7241ce01ad\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-82tdr_default\" network: cni config uninitialized"
	I0421 20:32:19.169586    7460 command_runner.go:130] ! : exit status 1
	I0421 20:32:19.205105    7460 command_runner.go:130] > [preflight] Running pre-flight checks
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Stopping the kubelet service
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0421 20:32:19.205393    7460 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0421 20:32:19.205393    7460 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0421 20:32:19.205393    7460 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0421 20:32:19.205521    7460 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0421 20:32:19.205521    7460 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0421 20:32:19.205521    7460 command_runner.go:130] > to reset your system's IPVS tables.
	I0421 20:32:19.205521    7460 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0421 20:32:19.205577    7460 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0421 20:32:19.205627    7460 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (6.0022504s)
	I0421 20:32:19.205681    7460 node.go:155] successfully reset node "multinode-152500-m02"
	I0421 20:32:19.207004    7460 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:32:19.207128    7460 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.197.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:32:19.208784    7460 cert_rotation.go:137] Starting client certificate rotation controller
	I0421 20:32:19.209248    7460 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0421 20:32:19.209277    7460 round_trippers.go:463] DELETE https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:19.209277    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:19.209277    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:19.209277    7460 round_trippers.go:473]     Content-Type: application/json
	I0421 20:32:19.209277    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:19.228158    7460 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0421 20:32:19.228630    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:19.228630    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:19.228704    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:19.228704    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Content-Length: 171
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:19 GMT
	I0421 20:32:19.228704    7460 round_trippers.go:580]     Audit-Id: e8319bf1-d416-49b4-a060-10ce0eedf4e6
	I0421 20:32:19.228819    7460 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-152500-m02","kind":"nodes","uid":"6ea8c978-95ad-4dec-9c1d-d40201186799"}}
	I0421 20:32:19.228848    7460 node.go:180] successfully deleted node "multinode-152500-m02"
	I0421 20:32:19.228915    7460 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
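
Re-adding an existing worker follows the fixed sequence visible above: drain the node (ignoring DaemonSets and deleting emptyDir data), run kubeadm reset on the node itself to wipe its local state, delete the Node object from the API server, and only then mint a fresh join token. A condensed sketch with the names from this run:

  kubectl drain multinode-152500-m02 --force --grace-period=1 --disable-eviction \
    --ignore-daemonsets --delete-emptydir-data
  # then, on the worker itself:
  #   sudo kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock
  # back on the control plane:
  kubectl delete node multinode-152500-m02
  kubeadm token create --print-join-command --ttl=0
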
	I0421 20:32:19.228995    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0421 20:32:19.228995    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:32:21.391409    7460 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:32:21.391714    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:21.391838    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:32:24.001619    7460 main.go:141] libmachine: [stdout =====>] : 172.27.197.221
	
	I0421 20:32:24.002326    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:24.002512    7460 sshutil.go:53] new ssh client: &{IP:172.27.197.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:32:24.217262    7460 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3fngkc.qbfp2gcb61j0uepy --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 
	I0421 20:32:24.217262    7460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9882309s)
	I0421 20:32:24.217262    7460 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:32:24.217262    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3fngkc.qbfp2gcb61j0uepy --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-152500-m02"
	I0421 20:32:24.458913    7460 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0421 20:32:25.867157    7460 command_runner.go:130] > [preflight] Running pre-flight checks
	I0421 20:32:25.867157    7460 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0421 20:32:25.867157    7460 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.00168565s
	I0421 20:32:25.867157    7460 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0421 20:32:25.867157    7460 command_runner.go:130] > This node has joined the cluster:
	I0421 20:32:25.867157    7460 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0421 20:32:25.867157    7460 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0421 20:32:25.867157    7460 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0421 20:32:25.867157    7460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3fngkc.qbfp2gcb61j0uepy --discovery-token-ca-cert-hash sha256:9d8606c096c1741b1e77008231f8fdcf23fc2cba394c66de0720f71a8d0cc9c0 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-152500-m02": (1.6498833s)
	I0421 20:32:25.867157    7460 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0421 20:32:26.112459    7460 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0421 20:32:26.331943    7460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-152500-m02 minikube.k8s.io/updated_at=2024_04_21T20_32_26_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6 minikube.k8s.io/name=multinode-152500 minikube.k8s.io/primary=false
	I0421 20:32:26.466184    7460 command_runner.go:130] > node/multinode-152500-m02 labeled
	I0421 20:32:26.466390    7460 start.go:318] duration metric: took 23.8232102s to joinCluster
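The label step just above ("kubectl label --overwrite nodes multinode-152500-m02 ... minikube.k8s.io/primary=false") is the last piece of the worker-join sequence. As a minimal, illustrative sketch only (not minikube's actual implementation), the same labeling can be done with client-go; the kubeconfig path and the error handling below are assumptions, while the node name and label keys are the ones shown in the log:

    // Illustrative sketch: apply minikube-style labels to a joined worker node
    // via client-go, mirroring the `kubectl label --overwrite nodes ...` step above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; minikube itself uses its profile kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-152500-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if node.Labels == nil {
            node.Labels = map[string]string{}
        }
        // Same label keys minikube applies in the log above.
        node.Labels["minikube.k8s.io/name"] = "multinode-152500"
        node.Labels["minikube.k8s.io/primary"] = "false"
        if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("node labeled")
    }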
	I0421 20:32:26.466544    7460 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.27.194.200 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0421 20:32:26.469391    7460 out.go:177] * Verifying Kubernetes components...
	I0421 20:32:26.467274    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:26.487259    7460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 20:32:26.728099    7460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0421 20:32:26.769003    7460 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 20:32:26.769983    7460 kapi.go:59] client config for multinode-152500: &rest.Config{Host:"https://172.27.197.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-152500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2875620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0421 20:32:26.770902    7460 node_ready.go:35] waiting up to 6m0s for node "multinode-152500-m02" to be "Ready" ...
	I0421 20:32:26.771091    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:26.771091    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:26.771091    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:26.771091    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:26.775859    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:26.775859    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:26.775859    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:26.775859    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:26.775859    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:26.775859    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:26.775859    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:26 GMT
	I0421 20:32:26.776153    7460 round_trippers.go:580]     Audit-Id: e8b44885-1606-43aa-b0a0-0cd6bb4e1f2b
	I0421 20:32:26.776461    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:27.285149    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:27.285365    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:27.285365    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:27.285365    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:27.289857    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:27.290002    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:27.290002    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:27.290002    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:27.290002    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:27.290114    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:27 GMT
	I0421 20:32:27.290114    7460 round_trippers.go:580]     Audit-Id: 40370ec9-5871-42c7-bd32-2a2700870a9d
	I0421 20:32:27.290114    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:27.290350    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:27.776551    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:27.776628    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:27.776683    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:27.776683    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:27.778939    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:27.778939    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:27.778939    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:27.778939    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:27 GMT
	I0421 20:32:27.778939    7460 round_trippers.go:580]     Audit-Id: 8e3a1932-1292-4043-a2d8-6ed0b10ffd0c
	I0421 20:32:27.779888    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:28.284432    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:28.284432    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:28.284432    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:28.284432    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:28.289398    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:28.289398    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:28.289597    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:28 GMT
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Audit-Id: 2794ba7d-8409-4e04-8b01-81ca9f06a170
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:28.289597    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:28.289597    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:28.289962    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:28.771902    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:28.771902    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:28.771902    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:28.771902    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:28.778200    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:32:28.778200    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:28 GMT
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Audit-Id: ca2f4308-73a3-4dee-ba64-15abe28e771c
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:28.778200    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:28.778200    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:28.778200    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:28.778200    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:28.778853    7460 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:32:29.273070    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:29.273136    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:29.273136    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:29.273198    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:29.276994    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:29.277464    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:29.277464    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:29.277464    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:29 GMT
	I0421 20:32:29.277464    7460 round_trippers.go:580]     Audit-Id: a46bac6a-33cd-4cc8-a2d9-0e64a2fd0733
	I0421 20:32:29.277657    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2079","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3566 chars]
	I0421 20:32:29.786054    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:29.786054    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:29.786054    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:29.786054    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:29.790703    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:29.790703    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:29 GMT
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Audit-Id: 04536091-7d9e-40db-bf5f-df1e4b42660b
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:29.790703    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:29.790703    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:29.790703    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:29.791069    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:30.272384    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:30.272384    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:30.272384    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:30.272384    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:30.276460    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:30.276460    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:30.276460    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:30.276460    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:30.277208    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:30.277208    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:30 GMT
	I0421 20:32:30.277208    7460 round_trippers.go:580]     Audit-Id: c267f12c-492a-48b1-a284-03cbfb187eee
	I0421 20:32:30.277208    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:30.277208    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:30.772074    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:30.772074    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:30.772074    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:30.772163    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:30.775506    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:30.776301    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Audit-Id: 17f4fa98-0eea-473b-9a10-55729f67331f
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:30.776301    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:30.776301    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:30.776301    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:30 GMT
	I0421 20:32:30.776526    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:31.271342    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:31.271342    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:31.271598    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:31.271598    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:31.275856    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:31.276097    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:31.276097    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:31 GMT
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Audit-Id: 2143ae62-165f-450b-bef1-a95ac34ae7d1
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:31.276097    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:31.276097    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:31.276503    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:31.276503    7460 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:32:31.772444    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:31.772444    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:31.772444    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:31.772444    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:31.776662    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:31.776662    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:31.776662    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:31.776662    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:31 GMT
	I0421 20:32:31.776662    7460 round_trippers.go:580]     Audit-Id: d7c9cea8-3f8f-4360-b108-559328b3b916
	I0421 20:32:31.776851    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:32.285917    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:32.285917    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:32.285917    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:32.285917    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:32.289619    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:32.289619    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:32.289619    7460 round_trippers.go:580]     Audit-Id: 3c9b8cbf-ecb8-4a6e-b30d-e47661e9a3c3
	I0421 20:32:32.289619    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:32.289619    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:32.289986    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:32.289986    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:32.289986    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:32 GMT
	I0421 20:32:32.290222    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:32.783494    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:32.783571    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:32.783571    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:32.783571    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:32.787500    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:32.787500    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Audit-Id: ee561472-1707-4fb9-b59b-7a69dbd14e5a
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:32.787500    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:32.787500    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:32.787500    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:32 GMT
	I0421 20:32:32.787500    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:33.282752    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:33.283000    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:33.283000    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:33.283134    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:33.286619    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:33.286619    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:33.286619    7460 round_trippers.go:580]     Audit-Id: 9cfe0dc8-65d9-4070-991a-7d2f07239775
	I0421 20:32:33.286897    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:33.286897    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:33.286897    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:33.286897    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:33.286897    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:33 GMT
	I0421 20:32:33.287174    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:33.287949    7460 node_ready.go:53] node "multinode-152500-m02" has status "Ready":"False"
	I0421 20:32:33.783243    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:33.783243    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:33.783243    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:33.783243    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:33.786882    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:33.787593    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Audit-Id: 850b6f82-fe74-47ce-9b91-b23498514e2c
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:33.787593    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:33.787593    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:33.787593    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:33 GMT
	I0421 20:32:33.787924    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2103","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3675 chars]
	I0421 20:32:34.284867    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:34.285179    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.285179    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.285179    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.288534    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.288534    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.288534    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.288534    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.288534    7460 round_trippers.go:580]     Audit-Id: e99fd2cb-d96a-4993-8030-e9576f7eff4e
	I0421 20:32:34.289471    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.289471    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.289471    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.290234    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2111","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3933 chars]
	I0421 20:32:34.290839    7460 node_ready.go:49] node "multinode-152500-m02" has status "Ready":"True"
	I0421 20:32:34.290902    7460 node_ready.go:38] duration metric: took 7.5199459s for node "multinode-152500-m02" to be "Ready" ...
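The block above is minikube's node_ready.go repeatedly GETting the node object (roughly every 500ms, up to 6m) until its NodeReady condition turns True. A minimal client-go sketch of the same check follows; it is not minikube's code, and the kubeconfig path and polling cadence are assumptions taken from what is visible in the log:

    // Illustrative sketch: poll a node until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms for up to 6 minutes, mirroring the cadence in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-152500-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }

An equivalent one-off check from the CLI would be `kubectl wait --for=condition=Ready node/multinode-152500-m02 --timeout=6m`.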
	I0421 20:32:34.290902    7460 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0421 20:32:34.291012    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods
	I0421 20:32:34.291012    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.291012    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.291012    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.299020    7460 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0421 20:32:34.299020    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.299020    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.299020    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.299020    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.299020    7460 round_trippers.go:580]     Audit-Id: 5e460b6d-7590-45dc-94c5-85887051c028
	I0421 20:32:34.299214    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.299214    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.301119    7460 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2113"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86160 chars]
	I0421 20:32:34.305790    7460 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.305977    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v7pf8
	I0421 20:32:34.305977    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.306042    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.306042    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.308799    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:34.308799    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.308799    7460 round_trippers.go:580]     Audit-Id: f61bc0c3-7602-4237-a560-9392a6e1082b
	I0421 20:32:34.308799    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.309309    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.309309    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.309309    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.309309    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.309498    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v7pf8","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2973ebed-006d-4495-b1a7-7b4472e46f23","resourceVersion":"1940","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"b102db03-cc12-4969-bb15-76920f332a1b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b102db03-cc12-4969-bb15-76920f332a1b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0421 20:32:34.310166    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.310166    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.310222    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.310222    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.312996    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:34.312996    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.312996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Audit-Id: c9b60101-1b2f-4a5f-ac40-a9bedf14e455
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.312996    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.312996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.313321    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.313740    7460 pod_ready.go:92] pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.313740    7460 pod_ready.go:81] duration metric: took 7.8864ms for pod "coredns-7db6d8ff4d-v7pf8" in "kube-system" namespace to be "Ready" ...
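The per-pod waits that follow (pod_ready.go) read each system pod's PodReady condition in the same way. A small illustrative helper, again not minikube's code, with the namespace and pod name taken from the log and error handling simplified:

    // Illustrative sketch: report whether a pod's PodReady condition is True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-v7pf8")
        if err != nil {
            panic(err)
        }
        fmt.Println("coredns ready:", ready)
    }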
	I0421 20:32:34.313740    7460 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.313740    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-152500
	I0421 20:32:34.313740    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.313740    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.313740    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.316321    7460 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0421 20:32:34.316321    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Audit-Id: f36c02a1-b7a9-4bd3-95fc-9dc8d1a377cb
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.316321    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.316321    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.316321    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.317350    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-152500","namespace":"kube-system","uid":"437e0c4d-b43f-48c8-9fee-93e3e8a81c6d","resourceVersion":"1914","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.197.221:2379","kubernetes.io/config.hash":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.mirror":"790956de7c3c7f0a8c76863fd788c9d7","kubernetes.io/config.seen":"2024-04-21T20:29:40.589799517Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0421 20:32:34.317839    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.317918    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.317918    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.317918    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.325289    7460 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0421 20:32:34.325289    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.325289    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.325289    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Audit-Id: 47f345bf-830e-4337-bc1c-452e68ebb1f1
	I0421 20:32:34.325289    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.326144    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.326293    7460 pod_ready.go:92] pod "etcd-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.326293    7460 pod_ready.go:81] duration metric: took 12.5532ms for pod "etcd-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.326293    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.326293    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-152500
	I0421 20:32:34.326293    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.326293    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.326293    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.332303    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:34.332354    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Audit-Id: d76086e3-694f-48d6-8f06-25b42d015948
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.332354    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.332354    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.332354    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.332682    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-152500","namespace":"kube-system","uid":"6e73294a-2a7d-4f05-beb1-bb011d5f1f52","resourceVersion":"1911","creationTimestamp":"2024-04-21T20:29:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.197.221:8443","kubernetes.io/config.hash":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.mirror":"3e7cca4b2f2b8b0a0de074d5af60c9fc","kubernetes.io/config.seen":"2024-04-21T20:29:40.518049422Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:29:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0421 20:32:34.333294    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.333294    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.333294    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.333294    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.336415    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.336502    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Audit-Id: 2cbe26b1-08ff-430c-85f8-f2d6b45e5842
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.336502    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.336502    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.336502    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.336727    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.337005    7460 pod_ready.go:92] pod "kube-apiserver-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.337005    7460 pod_ready.go:81] duration metric: took 10.7121ms for pod "kube-apiserver-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.337005    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.337212    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-152500
	I0421 20:32:34.337212    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.337212    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.337212    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.340519    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.340519    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Audit-Id: 32834897-dff2-42d4-a7f7-13c52330cbe8
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.340519    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.340519    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.340519    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.341505    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-152500","namespace":"kube-system","uid":"a3e58103-1fb6-4e2f-aa47-76f3a8cfd758","resourceVersion":"1946","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.mirror":"86a8d451597aa8be5ed66eeb5e3b235d","kubernetes.io/config.seen":"2024-04-21T20:05:53.333723813Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0421 20:32:34.341665    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:34.341665    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.341665    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.341665    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.345332    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.345332    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Audit-Id: 4468f400-bb90-44d1-9d29-847c8e76d5d6
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.345332    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.345332    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.345332    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.345759    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:34.346211    7460 pod_ready.go:92] pod "kube-controller-manager-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.346211    7460 pod_ready.go:81] duration metric: took 9.2063ms for pod "kube-controller-manager-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.346211    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.488531    7460 request.go:629] Waited for 142.0547ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:32:34.488591    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zlm5
	I0421 20:32:34.488591    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.488591    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.488591    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.495446    7460 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0421 20:32:34.495446    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Audit-Id: 30ccac8c-0846-4f07-af48-114710d5e4a8
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.495446    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.495446    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.495446    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.495758    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9zlm5","generateName":"kube-proxy-","namespace":"kube-system","uid":"61ba111b-28e9-40db-943d-22a595fdc27e","resourceVersion":"2092","creationTimestamp":"2024-04-21T20:09:11Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:09:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5842 chars]
	I0421 20:32:34.693875    7460 request.go:629] Waited for 197.9063ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:34.693965    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m02
	I0421 20:32:34.693965    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.693965    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.694037    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.698181    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.698181    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.698181    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.698181    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.698267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.698267    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.698267    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.698267    7460 round_trippers.go:580]     Audit-Id: e5ad5945-4e3a-45cb-8f72-d56cc454ab6d
	I0421 20:32:34.698408    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m02","uid":"6e4cc52d-4edd-455a-b30b-ed0283559868","resourceVersion":"2115","creationTimestamp":"2024-04-21T20:32:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_32_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:32:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3813 chars]
	I0421 20:32:34.698851    7460 pod_ready.go:92] pod "kube-proxy-9zlm5" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:34.698851    7460 pod_ready.go:81] duration metric: took 352.6367ms for pod "kube-proxy-9zlm5" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.698851    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:34.895895    7460 request.go:629] Waited for 196.7928ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:32:34.896061    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kl8t2
	I0421 20:32:34.896185    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:34.896185    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:34.896185    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:34.899596    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:34.899596    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:34.900472    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:34.900472    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:34.900507    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:34.900507    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:34.900507    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:34 GMT
	I0421 20:32:34.900507    7460 round_trippers.go:580]     Audit-Id: 38d88449-f41a-40f7-8dac-f2ab810dccc9
	I0421 20:32:34.900701    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kl8t2","generateName":"kube-proxy-","namespace":"kube-system","uid":"4154de9b-3a7c-4ed6-a987-82b6d539dc7e","resourceVersion":"1893","creationTimestamp":"2024-04-21T20:06:07Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0421 20:32:35.098086    7460 request.go:629] Waited for 196.5585ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.098322    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.098322    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.098322    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.098322    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.103925    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:35.103925    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.103996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.103996    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Audit-Id: 62b42d63-a3f2-413a-9046-cfad777491fe
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.103996    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.104185    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:35.104709    7460 pod_ready.go:92] pod "kube-proxy-kl8t2" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:35.104709    7460 pod_ready.go:81] duration metric: took 405.8553ms for pod "kube-proxy-kl8t2" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.104709    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.288093    7460 request.go:629] Waited for 183.2145ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:32:35.288093    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sp699
	I0421 20:32:35.288303    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.288303    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.288303    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.293234    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:35.293449    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.293449    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.293449    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Audit-Id: 6c3a9768-3440-4669-b3d8-20d7f36eae33
	I0421 20:32:35.293449    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.293898    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sp699","generateName":"kube-proxy-","namespace":"kube-system","uid":"8eab29a5-b24b-4d2c-a829-fbf2770ef34c","resourceVersion":"1781","creationTimestamp":"2024-04-21T20:13:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f3a74a6a-f36b-4abe-b88a-050de0ddef12","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:13:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3a74a6a-f36b-4abe-b88a-050de0ddef12\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0421 20:32:35.489801    7460 request.go:629] Waited for 195.016ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:32:35.490036    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500-m03
	I0421 20:32:35.490036    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.490036    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.490036    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.517787    7460 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0421 20:32:35.518132    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.518132    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Audit-Id: 7bcc2f31-f404-44e1-a83d-ddc1b5ed7a0e
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.518132    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.518132    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.518636    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500-m03","uid":"9c2fb882-be16-4c12-815f-4dd3e35c66ee","resourceVersion":"1953","creationTimestamp":"2024-04-21T20:25:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_04_21T20_25_05_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:25:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4399 chars]
	I0421 20:32:35.519134    7460 pod_ready.go:97] node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:32:35.519205    7460 pod_ready.go:81] duration metric: took 414.4934ms for pod "kube-proxy-sp699" in "kube-system" namespace to be "Ready" ...
	E0421 20:32:35.519205    7460 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-152500-m03" hosting pod "kube-proxy-sp699" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-152500-m03" has status "Ready":"Unknown"
	I0421 20:32:35.519205    7460 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.693604    7460 request.go:629] Waited for 174.1432ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:32:35.693691    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-152500
	I0421 20:32:35.693691    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.693691    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.693691    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.697456    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:35.697456    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Audit-Id: 67ea80ea-d2be-4b34-8482-08d7825f3566
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.697456    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.697456    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.697456    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.697842    7460 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-152500","namespace":"kube-system","uid":"8178553d-7f1d-423a-89e5-41b226b2bb6d","resourceVersion":"1907","creationTimestamp":"2024-04-21T20:05:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.mirror":"0aef6e4e48dde930e5589d93194dc8e3","kubernetes.io/config.seen":"2024-04-21T20:05:53.333724913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-21T20:05:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0421 20:32:35.896861    7460 request.go:629] Waited for 198.1194ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.896861    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes/multinode-152500
	I0421 20:32:35.896861    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:35.896861    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:35.896861    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:35.900475    7460 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0421 20:32:35.900475    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:35.900475    7460 round_trippers.go:580]     Audit-Id: 8f2956c7-eb6a-43d9-bead-d5705ba6ccb3
	I0421 20:32:35.900999    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:35.900999    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:35.900999    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:35.900999    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:35.900999    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:35 GMT
	I0421 20:32:35.904123    7460 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-04-21T20:05:49Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0421 20:32:35.904709    7460 pod_ready.go:92] pod "kube-scheduler-multinode-152500" in "kube-system" namespace has status "Ready":"True"
	I0421 20:32:35.904850    7460 pod_ready.go:81] duration metric: took 385.6414ms for pod "kube-scheduler-multinode-152500" in "kube-system" namespace to be "Ready" ...
	I0421 20:32:35.904850    7460 pod_ready.go:38] duration metric: took 1.6138733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
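	(Note: the repeated pod_ready.go waits above each poll a control-plane pod until its PodReady condition reports "True". Below is a minimal client-go sketch of that readiness check, given only as an illustration of the pattern in the log, not minikube's actual pod_ready.go code; the kubeconfig path is a placeholder.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has the PodReady condition set to True,
	// i.e. the same condition the pod_ready.go waits in the log are polling for.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Placeholder kubeconfig path; minikube writes its own profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := isPodReady(context.Background(), cs, "kube-system", "etcd-multinode-152500")
		fmt.Println(ready, err)
	}

	(The "waiting up to 6m0s" lines in the log correspond to wrapping a check like this in a poll loop with a 6-minute deadline.)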
	I0421 20:32:35.904850    7460 system_svc.go:44] waiting for kubelet service to be running ....
	I0421 20:32:35.919356    7460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:32:35.950184    7460 system_svc.go:56] duration metric: took 45.3343ms WaitForService to wait for kubelet
	I0421 20:32:35.950184    7460 kubeadm.go:576] duration metric: took 9.483526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0421 20:32:35.950184    7460 node_conditions.go:102] verifying NodePressure condition ...
	I0421 20:32:36.099311    7460 request.go:629] Waited for 148.8943ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.197.221:8443/api/v1/nodes
	I0421 20:32:36.099311    7460 round_trippers.go:463] GET https://172.27.197.221:8443/api/v1/nodes
	I0421 20:32:36.099311    7460 round_trippers.go:469] Request Headers:
	I0421 20:32:36.099311    7460 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0421 20:32:36.099311    7460 round_trippers.go:473]     Accept: application/json, */*
	I0421 20:32:36.104103    7460 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0421 20:32:36.104103    7460 round_trippers.go:577] Response Headers:
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Audit-Id: 58c3fa2b-603b-4471-add7-02d84c94417e
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Cache-Control: no-cache, private
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Content-Type: application/json
	I0421 20:32:36.104103    7460 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 67bc6d91-b24e-48be-9eb4-dc6d6e9c57d8
	I0421 20:32:36.104103    7460 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8310a4f-2f77-4499-bf0a-55ead1a2a13f
	I0421 20:32:36.104103    7460 round_trippers.go:580]     Date: Sun, 21 Apr 2024 20:32:36 GMT
	I0421 20:32:36.105379    7460 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2118"},"items":[{"metadata":{"name":"multinode-152500","uid":"02d74e28-878e-4578-b7a7-8bf57f1510ad","resourceVersion":"1908","creationTimestamp":"2024-04-21T20:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-152500","kubernetes.io/os":"linux","minikube.k8s.io/commit":"702dd7d90cdd919eaa4a48319794ed80d5b956e6","minikube.k8s.io/name":"multinode-152500","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_04_21T20_05_54_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15489 chars]
	I0421 20:32:36.105910    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:32:36.105910    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:32:36.105910    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:32:36.105910    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:32:36.105910    7460 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0421 20:32:36.105910    7460 node_conditions.go:123] node cpu capacity is 2
	I0421 20:32:36.105910    7460 node_conditions.go:105] duration metric: took 155.7245ms to run NodePressure ...
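	(Note: the NodePressure step above lists all nodes and reads each node's ephemeral-storage and cpu capacity, and the repeated "Waited for ... due to client-side throttling" messages come from client-go's client-side rate limiter. The sketch below reproduces that node listing and shows where the QPS/Burst knobs live; it is an assumption-laden illustration, not minikube's node_conditions.go, and the kubeconfig path is a placeholder.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Raising QPS/Burst relaxes the client-side throttling that produces the
		// "Waited for ... due to client-side throttling" messages in the log.
		cfg.QPS = 50
		cfg.Burst = 100

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The same two capacity figures node_conditions.go logs above.
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
				n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
		}
	}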
	I0421 20:32:36.105910    7460 start.go:240] waiting for startup goroutines ...
	I0421 20:32:36.105910    7460 start.go:254] writing updated cluster config ...
	I0421 20:32:36.109963    7460 out.go:177] 
	I0421 20:32:36.114910    7460 config.go:182] Loaded profile config "ha-736000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:36.122726    7460 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:32:36.122726    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:32:36.129569    7460 out.go:177] * Starting "multinode-152500-m03" worker node in "multinode-152500" cluster
	I0421 20:32:36.131604    7460 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 20:32:36.131604    7460 cache.go:56] Caching tarball of preloaded images
	I0421 20:32:36.132225    7460 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 20:32:36.132225    7460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 20:32:36.132225    7460 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-152500\config.json ...
	I0421 20:32:36.140239    7460 start.go:360] acquireMachinesLock for multinode-152500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 20:32:36.140443    7460 start.go:364] duration metric: took 98.4µs to acquireMachinesLock for "multinode-152500-m03"
	I0421 20:32:36.140687    7460 start.go:96] Skipping create...Using existing machine configuration
	I0421 20:32:36.140767    7460 fix.go:54] fixHost starting: m03
	I0421 20:32:36.141586    7460 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m03 ).state
	I0421 20:32:38.261989    7460 main.go:141] libmachine: [stdout =====>] : Off
	
	I0421 20:32:38.262248    7460 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:32:38.262324    7460 fix.go:112] recreateIfNeeded on multinode-152500-m03: state=Stopped err=<nil>
	W0421 20:32:38.262324    7460 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 20:32:38.265870    7460 out.go:177] * Restarting existing hyperv VM for "multinode-152500-m03" ...
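	(Note: the fixHost step above checks the VM state by shelling out to PowerShell with "( Hyper-V\Get-VM <name> ).state" and parsing stdout ("Off" here). A minimal Go sketch of that call shape, assuming powershell.exe is on PATH and Hyper-V is available; it illustrates the pattern seen in the log, not libmachine's actual hyperv driver code.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmState runs the same kind of PowerShell query the hyperv driver logs above
	// and returns the VM state string (e.g. "Off" or "Running").
	func vmState(name string) (string, error) {
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
		out, err := cmd.CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("get-vm %s: %w: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := vmState("multinode-152500-m03")
		fmt.Println(state, err)
	}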
	
	
	==> Docker <==
	Apr 21 20:29:57 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:57.810157127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:29:57 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:57.810176914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:29:57 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:57.810612625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:29:57 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:57.821831375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:29:57 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:57.822227712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:29:57 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:57.822469751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:29:57 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:57.822833709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:29:58 multinode-152500 cri-dockerd[1278]: time="2024-04-21T20:29:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ed1d97af88def39a02c7e476eebe3776522f43530eda66048ce667b50daac3e3/resolv.conf as [nameserver 172.27.192.1]"
	Apr 21 20:29:58 multinode-152500 cri-dockerd[1278]: time="2024-04-21T20:29:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a73553149e99b301322a952a27caac3060ab3742bff47e0a8aeb2b9b4e875c6e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.476545996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.476979127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.477033493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.477234569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.501028507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.501783139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.501814819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:29:58 multinode-152500 dockerd[1057]: time="2024-04-21T20:29:58.502012297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:30:21 multinode-152500 dockerd[1057]: time="2024-04-21T20:30:21.358797734Z" level=info msg="shim disconnected" id=b7310952b3e31ac7a16df4a7f3267eecf905d2ee024779078b67b15e9e025d19 namespace=moby
	Apr 21 20:30:21 multinode-152500 dockerd[1057]: time="2024-04-21T20:30:21.359215123Z" level=warning msg="cleaning up after shim disconnected" id=b7310952b3e31ac7a16df4a7f3267eecf905d2ee024779078b67b15e9e025d19 namespace=moby
	Apr 21 20:30:21 multinode-152500 dockerd[1057]: time="2024-04-21T20:30:21.359233623Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 20:30:21 multinode-152500 dockerd[1051]: time="2024-04-21T20:30:21.361381570Z" level=info msg="ignoring event" container=b7310952b3e31ac7a16df4a7f3267eecf905d2ee024779078b67b15e9e025d19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 20:30:36 multinode-152500 dockerd[1057]: time="2024-04-21T20:30:36.883556080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 20:30:36 multinode-152500 dockerd[1057]: time="2024-04-21T20:30:36.883758376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 20:30:36 multinode-152500 dockerd[1057]: time="2024-04-21T20:30:36.883775776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 20:30:36 multinode-152500 dockerd[1057]: time="2024-04-21T20:30:36.884215268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b7916704e2ba       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   e07535c9ce59e       storage-provisioner
	ff93038ab09f0       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   a73553149e99b       busybox-fc5497c4f-l6544
	0a4d44d7315aa       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   ed1d97af88def       coredns-7db6d8ff4d-v7pf8
	542cb6892ab35       4950bb10b3f87                                                                                         3 minutes ago       Running             kindnet-cni               1                   34a89c7eae445       kindnet-vb8ws
	b7310952b3e31       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   e07535c9ce59e       storage-provisioner
	22d202d1d9609       a0bf559e280cf                                                                                         3 minutes ago       Running             kube-proxy                1                   390673a07821f       kube-proxy-kl8t2
	22e01fabc1dc3       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   777e41e6206b2       etcd-multinode-152500
	0bd7755f45198       c42f13656d0b2                                                                                         3 minutes ago       Running             kube-apiserver            0                   c613fb63846c8       kube-apiserver-multinode-152500
	bd8e6767148d1       259c8277fcbbc                                                                                         3 minutes ago       Running             kube-scheduler            1                   b33efa8c6ed64       kube-scheduler-multinode-152500
	e8ccaad100dd9       c7aad43836fa5                                                                                         3 minutes ago       Running             kube-controller-manager   1                   b7cf88084c483       kube-controller-manager-multinode-152500
	278fdd61d87c0       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   3cc4feec2773e       busybox-fc5497c4f-l6544
	a6fab3c7e2816       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   d6ef972126a90       coredns-7db6d8ff4d-v7pf8
	ad328e25a9d02       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Exited              kindnet-cni               0                   0e66350415f0c       kindnet-vb8ws
	7f128889bd612       a0bf559e280cf                                                                                         26 minutes ago      Exited              kube-proxy                0                   a3675838aa7c8       kube-proxy-kl8t2
	0bd5af3b1831b       259c8277fcbbc                                                                                         27 minutes ago      Exited              kube-scheduler            0                   b0eb5fe004810       kube-scheduler-multinode-152500
	0690342790fe5       c7aad43836fa5                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   e6ae7d993bb91       kube-controller-manager-multinode-152500
	
	
	==> coredns [0a4d44d7315a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = de98fc7f457fe7491daa3d9d76be7f71a9abd25d984af2a62bb46996cb08a67c43ff1cd584a40d0f2ba65c174ae12856de0f6ecf594962c6086de5d30a624a4a
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34851 - 64600 "HINFO IN 8111908546522325039.7613198986804460916. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073940074s
	
	
	==> coredns [a6fab3c7e281] <==
	[INFO] 10.244.0.3:52240 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117501s
	[INFO] 10.244.0.3:37053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001616s
	[INFO] 10.244.0.3:37130 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000252701s
	[INFO] 10.244.0.3:56209 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000306401s
	[INFO] 10.244.0.3:41964 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117s
	[INFO] 10.244.0.3:39822 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001671s
	[INFO] 10.244.0.3:48735 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001334s
	[INFO] 10.244.1.2:44124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002673s
	[INFO] 10.244.1.2:39375 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000849s
	[INFO] 10.244.1.2:47331 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000754s
	[INFO] 10.244.1.2:33685 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000723s
	[INFO] 10.244.0.3:49605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001058s
	[INFO] 10.244.0.3:54097 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000798s
	[INFO] 10.244.0.3:59400 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085901s
	[INFO] 10.244.0.3:38777 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001295s
	[INFO] 10.244.1.2:46340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116s
	[INFO] 10.244.1.2:38103 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173401s
	[INFO] 10.244.1.2:56467 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001767s
	[INFO] 10.244.1.2:35140 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095401s
	[INFO] 10.244.0.3:56335 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002217s
	[INFO] 10.244.0.3:59693 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126301s
	[INFO] 10.244.0.3:33936 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000798s
	[INFO] 10.244.0.3:33049 - 5 "PTR IN 1.192.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000631802s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-152500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-152500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-152500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_21T20_05_54_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 20:05:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-152500
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:33:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:29:55 +0000   Sun, 21 Apr 2024 20:05:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:29:55 +0000   Sun, 21 Apr 2024 20:05:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:29:55 +0000   Sun, 21 Apr 2024 20:05:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:29:55 +0000   Sun, 21 Apr 2024 20:29:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.197.221
	  Hostname:    multinode-152500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d1470ecfcf94484be7a2b080a09e57b
	  System UUID:                f600d953-6b53-3d42-a020-58dc7452e9bc
	  Boot ID:                    f566da6b-bf00-4967-9376-ec07f066cde3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l6544                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-v7pf8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-152500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m14s
	  kube-system                 kindnet-vb8ws                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-152500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-controller-manager-multinode-152500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-kl8t2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-152500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-152500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-152500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-152500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     27m (x2 over 27m)      kubelet          Node multinode-152500 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m (x2 over 27m)      kubelet          Node multinode-152500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x2 over 27m)      kubelet          Node multinode-152500 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-152500 event: Registered Node multinode-152500 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-152500 status is now: NodeReady
	  Normal  Starting                 3m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m23s (x8 over 3m23s)  kubelet          Node multinode-152500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x8 over 3m23s)  kubelet          Node multinode-152500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x7 over 3m23s)  kubelet          Node multinode-152500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m4s                   node-controller  Node multinode-152500 event: Registered Node multinode-152500 in Controller
	
	
	Name:               multinode-152500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-152500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-152500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T20_32_26_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 20:32:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-152500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Apr 2024 20:32:33 +0000   Sun, 21 Apr 2024 20:32:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Apr 2024 20:32:33 +0000   Sun, 21 Apr 2024 20:32:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Apr 2024 20:32:33 +0000   Sun, 21 Apr 2024 20:32:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Apr 2024 20:32:33 +0000   Sun, 21 Apr 2024 20:32:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.194.200
	  Hostname:    multinode-152500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a129f3235694c91855eb889933bc39a
	  System UUID:                878d0256-95a4-6549-a6bd-12de64a17f7c
	  Boot ID:                    62bfe781-f2ae-4b3d-94fc-d0557198f653
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9hrrp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-rkgsx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-9zlm5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 35s                kube-proxy       
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-152500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-152500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-152500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node multinode-152500-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  38s (x2 over 38s)  kubelet          Node multinode-152500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x2 over 38s)  kubelet          Node multinode-152500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x2 over 38s)  kubelet          Node multinode-152500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                node-controller  Node multinode-152500-m02 event: Registered Node multinode-152500-m02 in Controller
	  Normal  NodeReady                30s                kubelet          Node multinode-152500-m02 status is now: NodeReady
	
	
	Name:               multinode-152500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-152500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=702dd7d90cdd919eaa4a48319794ed80d5b956e6
	                    minikube.k8s.io/name=multinode-152500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_21T20_25_05_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Apr 2024 20:25:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-152500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Apr 2024 20:26:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 21 Apr 2024 20:25:10 +0000   Sun, 21 Apr 2024 20:26:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 21 Apr 2024 20:25:10 +0000   Sun, 21 Apr 2024 20:26:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 21 Apr 2024 20:25:10 +0000   Sun, 21 Apr 2024 20:26:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 21 Apr 2024 20:25:10 +0000   Sun, 21 Apr 2024 20:26:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.27.193.99
	  Hostname:    multinode-152500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 daa525f8728b4f2b9eae7156c5160c64
	  System UUID:                0f0f8b21-6c94-234a-956b-6fa0c8ab1cba
	  Boot ID:                    d4f8bc40-0a31-4fd1-a2a2-683ec708a564
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kvd8z       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-sp699    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 7m55s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-152500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-152500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-152500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-152500-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m59s (x2 over 7m59s)  kubelet          Node multinode-152500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m59s (x2 over 7m59s)  kubelet          Node multinode-152500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s (x2 over 7m59s)  kubelet          Node multinode-152500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m56s                  node-controller  Node multinode-152500-m03 event: Registered Node multinode-152500-m03 in Controller
	  Normal  NodeReady                7m53s                  kubelet          Node multinode-152500-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m16s                  node-controller  Node multinode-152500-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m4s                   node-controller  Node multinode-152500-m03 event: Registered Node multinode-152500-m03 in Controller
	
	
	==> dmesg <==
	[  +0.790338] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[Apr21 20:28] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.768965] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr21 20:29] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.201958] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[ +27.595904] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.115364] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.648852] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +0.245872] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	[  +0.293836] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	[  +3.064411] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.251242] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.235898] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	[  +0.319046] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.116652] kauditd_printk_skb: 183 callbacks suppressed
	[  +0.912860] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +4.112217] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +0.117336] kauditd_printk_skb: 34 callbacks suppressed
	[ +10.126305] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.206338] systemd-fstab-generator[2190]: Ignoring "noauto" option for root device
	[  +6.268426] kauditd_printk_skb: 70 callbacks suppressed
	[Apr21 20:30] kauditd_printk_skb: 14 callbacks suppressed
	[Apr21 20:32] hrtimer: interrupt took 1245703 ns
	
	
	==> etcd [22e01fabc1dc] <==
	{"level":"info","ts":"2024-04-21T20:29:42.840289Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T20:29:42.842279Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-21T20:29:42.843283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4669ade0122dcab switched to configuration voters=(17610933671171382443)"}
	{"level":"info","ts":"2024-04-21T20:29:42.844357Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"534b43d477b3e118","local-member-id":"f4669ade0122dcab","added-peer-id":"f4669ade0122dcab","added-peer-peer-urls":["https://172.27.198.190:2380"]}
	{"level":"info","ts":"2024-04-21T20:29:42.844982Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"534b43d477b3e118","local-member-id":"f4669ade0122dcab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T20:29:42.843761Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-21T20:29:42.843787Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.197.221:2380"}
	{"level":"info","ts":"2024-04-21T20:29:42.845857Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.197.221:2380"}
	{"level":"info","ts":"2024-04-21T20:29:42.846166Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-21T20:29:42.846454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f4669ade0122dcab","initial-advertise-peer-urls":["https://172.27.197.221:2380"],"listen-peer-urls":["https://172.27.197.221:2380"],"advertise-client-urls":["https://172.27.197.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.197.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-21T20:29:42.846484Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-21T20:29:44.27231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4669ade0122dcab is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-21T20:29:44.273015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4669ade0122dcab became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-21T20:29:44.273359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4669ade0122dcab received MsgPreVoteResp from f4669ade0122dcab at term 2"}
	{"level":"info","ts":"2024-04-21T20:29:44.273564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4669ade0122dcab became candidate at term 3"}
	{"level":"info","ts":"2024-04-21T20:29:44.273784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4669ade0122dcab received MsgVoteResp from f4669ade0122dcab at term 3"}
	{"level":"info","ts":"2024-04-21T20:29:44.274017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4669ade0122dcab became leader at term 3"}
	{"level":"info","ts":"2024-04-21T20:29:44.274326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4669ade0122dcab elected leader f4669ade0122dcab at term 3"}
	{"level":"info","ts":"2024-04-21T20:29:44.286274Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f4669ade0122dcab","local-member-attributes":"{Name:multinode-152500 ClientURLs:[https://172.27.197.221:2379]}","request-path":"/0/members/f4669ade0122dcab/attributes","cluster-id":"534b43d477b3e118","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-21T20:29:44.286551Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T20:29:44.289493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-21T20:29:44.295449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-21T20:29:44.302151Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-21T20:29:44.302564Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-21T20:29:44.303294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.197.221:2379"}
	
	
	==> kernel <==
	 20:33:03 up 5 min,  0 users,  load average: 0.27, 0.24, 0.10
	Linux multinode-152500 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [542cb6892ab3] <==
	I0421 20:32:33.408817       1 main.go:227] handling current node
	I0421 20:32:33.408833       1 main.go:223] Handling node with IPs: map[172.27.194.200:{}]
	I0421 20:32:33.408842       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:32:33.409258       1 routes.go:54] Removing invalid route {Ifindex: 3 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.27.195.108 Flags: [] Table: 254}
	I0421 20:32:33.409459       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.27.194.200 Flags: [] Table: 0} 
	I0421 20:32:33.409550       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:32:33.409768       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	I0421 20:32:43.421981       1 main.go:223] Handling node with IPs: map[172.27.197.221:{}]
	I0421 20:32:43.422027       1 main.go:227] handling current node
	I0421 20:32:43.422049       1 main.go:223] Handling node with IPs: map[172.27.194.200:{}]
	I0421 20:32:43.422240       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:32:43.422413       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:32:43.422433       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	I0421 20:32:53.459447       1 main.go:223] Handling node with IPs: map[172.27.197.221:{}]
	I0421 20:32:53.459553       1 main.go:227] handling current node
	I0421 20:32:53.459569       1 main.go:223] Handling node with IPs: map[172.27.194.200:{}]
	I0421 20:32:53.459594       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:32:53.460470       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:32:53.460555       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	I0421 20:33:03.466517       1 main.go:223] Handling node with IPs: map[172.27.197.221:{}]
	I0421 20:33:03.466547       1 main.go:227] handling current node
	I0421 20:33:03.466560       1 main.go:223] Handling node with IPs: map[172.27.194.200:{}]
	I0421 20:33:03.466567       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:33:03.466696       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:33:03.466705       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ad328e25a9d0] <==
	I0421 20:26:28.595757       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	I0421 20:26:38.609511       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:26:38.609863       1 main.go:227] handling current node
	I0421 20:26:38.610192       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:26:38.610329       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:26:38.610648       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:26:38.610744       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	I0421 20:26:48.631996       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:26:48.632339       1 main.go:227] handling current node
	I0421 20:26:48.632594       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:26:48.632714       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:26:48.633041       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:26:48.633270       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	I0421 20:26:58.654360       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:26:58.654467       1 main.go:227] handling current node
	I0421 20:26:58.654484       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:26:58.654494       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:26:58.654709       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:26:58.654955       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	I0421 20:27:08.691039       1 main.go:223] Handling node with IPs: map[172.27.198.190:{}]
	I0421 20:27:08.691068       1 main.go:227] handling current node
	I0421 20:27:08.691080       1 main.go:223] Handling node with IPs: map[172.27.195.108:{}]
	I0421 20:27:08.691088       1 main.go:250] Node multinode-152500-m02 has CIDR [10.244.1.0/24] 
	I0421 20:27:08.691208       1 main.go:223] Handling node with IPs: map[172.27.193.99:{}]
	I0421 20:27:08.691216       1 main.go:250] Node multinode-152500-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0bd7755f4519] <==
	I0421 20:29:46.442732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0421 20:29:46.442782       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0421 20:29:46.442878       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0421 20:29:46.443362       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0421 20:29:46.452862       1 aggregator.go:165] initial CRD sync complete...
	I0421 20:29:46.452900       1 autoregister_controller.go:141] Starting autoregister controller
	I0421 20:29:46.452909       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0421 20:29:46.452915       1 cache.go:39] Caches are synced for autoregister controller
	I0421 20:29:46.454432       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0421 20:29:46.473148       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0421 20:29:46.474815       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0421 20:29:46.475726       1 policy_source.go:224] refreshing policies
	I0421 20:29:46.494772       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0421 20:29:46.496415       1 shared_informer.go:320] Caches are synced for configmaps
	I0421 20:29:46.546451       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0421 20:29:47.295930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0421 20:29:47.740117       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.197.221 172.27.198.190]
	I0421 20:29:47.742215       1 controller.go:615] quota admission added evaluator for: endpoints
	I0421 20:29:47.751560       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0421 20:29:49.153292       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0421 20:29:49.365773       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0421 20:29:49.392554       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0421 20:29:49.514749       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0421 20:29:49.526993       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0421 20:30:07.736282       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.27.197.221]
	
	
	==> kube-controller-manager [0690342790fe] <==
	I0421 20:09:11.051450       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-152500-m02\" does not exist"
	I0421 20:09:11.075589       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-152500-m02" podCIDRs=["10.244.1.0/24"]
	I0421 20:09:11.846696       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-152500-m02"
	I0421 20:09:29.719456       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:09:56.628625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.387442ms"
	I0421 20:09:56.669605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.858655ms"
	I0421 20:09:56.670085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
	I0421 20:09:56.670437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.1µs"
	I0421 20:09:59.481408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.470647ms"
	I0421 20:09:59.497553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.105139ms"
	I0421 20:09:59.497729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.4µs"
	I0421 20:13:58.521055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:13:58.524094       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-152500-m03\" does not exist"
	I0421 20:13:58.586973       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-152500-m03" podCIDRs=["10.244.2.0/24"]
	I0421 20:14:01.933166       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-152500-m03"
	I0421 20:14:21.320416       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:22:12.075969       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:24:57.863997       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:25:04.547429       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:25:04.549374       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-152500-m03\" does not exist"
	I0421 20:25:04.586509       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-152500-m03" podCIDRs=["10.244.3.0/24"]
	I0421 20:25:10.760760       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m03"
	I0421 20:26:47.214572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:27:07.932952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.377857ms"
	I0421 20:27:07.934909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="152.902µs"
	
	
	==> kube-controller-manager [e8ccaad100dd] <==
	I0421 20:29:59.795506       1 shared_informer.go:320] Caches are synced for resource quota
	I0421 20:29:59.813446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.446679ms"
	I0421 20:29:59.813989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.997µs"
	I0421 20:29:59.816706       1 shared_informer.go:320] Caches are synced for cronjob
	I0421 20:29:59.863600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.475818ms"
	I0421 20:29:59.863928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.298µs"
	I0421 20:30:00.232696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 20:30:00.256098       1 shared_informer.go:320] Caches are synced for garbage collector
	I0421 20:30:00.256420       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0421 20:32:10.240798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.434726ms"
	I0421 20:32:10.261176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.303711ms"
	I0421 20:32:10.261286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.4µs"
	E0421 20:32:19.596885       1 gc_controller.go:153] "Failed to get node" err="node \"multinode-152500-m02\" not found" logger="pod-garbage-collector-controller" node="multinode-152500-m02"
	I0421 20:32:25.789212       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-152500-m02\" does not exist"
	I0421 20:32:25.803451       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-152500-m02" podCIDRs=["10.244.1.0/24"]
	I0421 20:32:26.742914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.2µs"
	I0421 20:32:33.898819       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-152500-m02"
	I0421 20:32:33.961882       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.6µs"
	I0421 20:32:41.775140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.901µs"
	I0421 20:32:41.783350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.3µs"
	I0421 20:32:41.806472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.4µs"
	I0421 20:32:42.019602       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.9µs"
	I0421 20:32:42.027910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.1µs"
	I0421 20:32:44.115263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.246168ms"
	I0421 20:32:44.117376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.4µs"
	
	
	==> kube-proxy [22d202d1d960] <==
	I0421 20:29:51.508123       1 server_linux.go:69] "Using iptables proxy"
	I0421 20:29:51.593674       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.197.221"]
	I0421 20:29:51.762324       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 20:29:51.762366       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 20:29:51.762387       1 server_linux.go:165] "Using iptables Proxier"
	I0421 20:29:51.779873       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 20:29:51.780717       1 server.go:872] "Version info" version="v1.30.0"
	I0421 20:29:51.780739       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 20:29:51.783983       1 config.go:192] "Starting service config controller"
	I0421 20:29:51.784128       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 20:29:51.784516       1 config.go:319] "Starting node config controller"
	I0421 20:29:51.784693       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 20:29:51.784957       1 config.go:101] "Starting endpoint slice config controller"
	I0421 20:29:51.788985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 20:29:51.885376       1 shared_informer.go:320] Caches are synced for node config
	I0421 20:29:51.885414       1 shared_informer.go:320] Caches are synced for service config
	I0421 20:29:51.894907       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [7f128889bd61] <==
	I0421 20:06:08.871442       1 server_linux.go:69] "Using iptables proxy"
	I0421 20:06:08.919143       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.27.198.190"]
	I0421 20:06:08.999885       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0421 20:06:09.000253       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0421 20:06:09.000550       1 server_linux.go:165] "Using iptables Proxier"
	I0421 20:06:09.006102       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0421 20:06:09.008607       1 server.go:872] "Version info" version="v1.30.0"
	I0421 20:06:09.008971       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 20:06:09.013742       1 config.go:192] "Starting service config controller"
	I0421 20:06:09.014250       1 config.go:101] "Starting endpoint slice config controller"
	I0421 20:06:09.015000       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0421 20:06:09.015212       1 config.go:319] "Starting node config controller"
	I0421 20:06:09.020499       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0421 20:06:09.015112       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0421 20:06:09.120519       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0421 20:06:09.121078       1 shared_informer.go:320] Caches are synced for service config
	I0421 20:06:09.121101       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0bd5af3b1831] <==
	E0421 20:05:51.012043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0421 20:05:51.038577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0421 20:05:51.038737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0421 20:05:51.067122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:51.067226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:51.077278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0421 20:05:51.077955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0421 20:05:51.189663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0421 20:05:51.190622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0421 20:05:51.259498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0421 20:05:51.259866       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0421 20:05:51.289701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0421 20:05:51.290247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0421 20:05:51.312769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:51.313151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:51.317544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0421 20:05:51.317832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0421 20:05:51.395001       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0421 20:05:51.395127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0421 20:05:51.575075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0421 20:05:51.575156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0421 20:05:51.605406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0421 20:05:51.606239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0421 20:05:52.716384       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0421 20:27:08.966933       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bd8e6767148d] <==
	I0421 20:29:44.007838       1 serving.go:380] Generated self-signed cert in-memory
	I0421 20:29:46.480243       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0421 20:29:46.480346       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0421 20:29:46.493856       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0421 20:29:46.493994       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0421 20:29:46.494027       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0421 20:29:46.494149       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0421 20:29:46.497345       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0421 20:29:46.497382       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0421 20:29:46.497400       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0421 20:29:46.497408       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0421 20:29:46.597293       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0421 20:29:46.598258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0421 20:29:46.599904       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Apr 21 20:29:53 multinode-152500 kubelet[1517]: E0421 20:29:53.281813    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62c649d2-6713-4642-96dc-8533faeb750f-kube-api-access-jsv5t podName:62c649d2-6713-4642-96dc-8533faeb750f nodeName:}" failed. No retries permitted until 2024-04-21 20:29:57.281795463 +0000 UTC m=+16.953053119 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jsv5t" (UniqueName: "kubernetes.io/projected/62c649d2-6713-4642-96dc-8533faeb750f-kube-api-access-jsv5t") pod "busybox-fc5497c4f-l6544" (UID: "62c649d2-6713-4642-96dc-8533faeb750f") : object "default"/"kube-root-ca.crt" not registered
	Apr 21 20:29:53 multinode-152500 kubelet[1517]: E0421 20:29:53.645956    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-l6544" podUID="62c649d2-6713-4642-96dc-8533faeb750f"
	Apr 21 20:29:53 multinode-152500 kubelet[1517]: E0421 20:29:53.646180    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-v7pf8" podUID="2973ebed-006d-4495-b1a7-7b4472e46f23"
	Apr 21 20:29:55 multinode-152500 kubelet[1517]: I0421 20:29:55.390229    1517 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Apr 21 20:30:22 multinode-152500 kubelet[1517]: I0421 20:30:22.126253    1517 scope.go:117] "RemoveContainer" containerID="bc85f90f7b1856c441254ebbab5b3e06bd85825c14ea9c275582fb785a72a591"
	Apr 21 20:30:22 multinode-152500 kubelet[1517]: I0421 20:30:22.126752    1517 scope.go:117] "RemoveContainer" containerID="b7310952b3e31ac7a16df4a7f3267eecf905d2ee024779078b67b15e9e025d19"
	Apr 21 20:30:22 multinode-152500 kubelet[1517]: E0421 20:30:22.127161    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2eea731d-6a0b-4404-8518-a088d879b487)\"" pod="kube-system/storage-provisioner" podUID="2eea731d-6a0b-4404-8518-a088d879b487"
	Apr 21 20:30:36 multinode-152500 kubelet[1517]: I0421 20:30:36.647198    1517 scope.go:117] "RemoveContainer" containerID="b7310952b3e31ac7a16df4a7f3267eecf905d2ee024779078b67b15e9e025d19"
	Apr 21 20:30:40 multinode-152500 kubelet[1517]: I0421 20:30:40.696021    1517 scope.go:117] "RemoveContainer" containerID="7ecc14e6d519e94a5ebe9f5ded2fecc19c0f92c7158daf2e0bb1c1e89877e650"
	Apr 21 20:30:40 multinode-152500 kubelet[1517]: E0421 20:30:40.705642    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:30:40 multinode-152500 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:30:40 multinode-152500 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:30:40 multinode-152500 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:30:40 multinode-152500 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:30:40 multinode-152500 kubelet[1517]: I0421 20:30:40.744293    1517 scope.go:117] "RemoveContainer" containerID="eb483e47dc21df12189e19b0c25ebd3c7c023bb361b6f786217043964a789d55"
	Apr 21 20:31:40 multinode-152500 kubelet[1517]: E0421 20:31:40.695727    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:31:40 multinode-152500 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:31:40 multinode-152500 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:31:40 multinode-152500 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:31:40 multinode-152500 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 21 20:32:40 multinode-152500 kubelet[1517]: E0421 20:32:40.696904    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 21 20:32:40 multinode-152500 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 21 20:32:40 multinode-152500 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 21 20:32:40 multinode-152500 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 21 20:32:40 multinode-152500 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 20:32:55.273209   13712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-152500 -n multinode-152500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-152500 -n multinode-152500: (12.4091566s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-152500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (449.30s)

                                                
                                    
TestKubernetesUpgrade (1625.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (7m49.1440445s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-208700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-208700: (36.1430435s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-208700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-208700 status --format={{.Host}}: exit status 7 (2.5398778s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:04:16.736936    9692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0421 21:05:20.162949   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 21:05:36.941155   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (6m53.7295273s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-208700 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (332.7085ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-208700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:11:13.182879    6680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-208700
	    minikube start -p kubernetes-upgrade-208700 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2087002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-208700 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
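The exit-106 refusal above is the downgrade guard this step of the test expects: an existing v1.30.0 cluster cannot be moved back to v1.20.0 in place, so minikube suggests deleting the profile or creating a second one. A minimal sketch of that kind of version comparison (illustrative only, not minikube's actual implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse returns the numeric major, minor, patch components of a "vX.Y.Z" string.
func parse(v string) (int, int, int) {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	nums := make([]int, 3)
	for i := 0; i < len(parts) && i < 3; i++ {
		nums[i], _ = strconv.Atoi(parts[i])
	}
	return nums[0], nums[1], nums[2]
}

// isDowngrade reports whether the requested version is older than the
// version the existing cluster is already running.
func isDowngrade(existing, requested string) bool {
	em, en, ep := parse(existing)
	rm, rn, rp := parse(requested)
	if rm != em {
		return rm < em
	}
	if rn != en {
		return rn < en
	}
	return rp < ep
}

func main() {
	if isDowngrade("v1.30.0", "v1.20.0") {
		fmt.Println("refusing in-place downgrade; delete the profile and recreate it at the older version")
	}
}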
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (7m23.9015077s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-208700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-208700" primary control-plane node in "kubernetes-upgrade-208700" cluster
	* Updating the running hyperv "kubernetes-upgrade-208700" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:11:13.537747    7172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0421 21:11:13.617736    7172 out.go:291] Setting OutFile to fd 1804 ...
	I0421 21:11:13.618744    7172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 21:11:13.618744    7172 out.go:304] Setting ErrFile to fd 1912...
	I0421 21:11:13.618744    7172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 21:11:13.645112    7172 out.go:298] Setting JSON to false
	I0421 21:11:13.649428    7172 start.go:129] hostinfo: {"hostname":"minikube6","uptime":19748,"bootTime":1713714124,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 21:11:13.649428    7172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 21:11:13.652416    7172 out.go:177] * [kubernetes-upgrade-208700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 21:11:13.656486    7172 notify.go:220] Checking for updates...
	I0421 21:11:13.658501    7172 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 21:11:13.660903    7172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 21:11:13.663933    7172 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 21:11:13.666967    7172 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 21:11:13.669735    7172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 21:11:13.672774    7172 config.go:182] Loaded profile config "kubernetes-upgrade-208700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 21:11:13.674122    7172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 21:11:19.949173    7172 out.go:177] * Using the hyperv driver based on existing profile
	I0421 21:11:19.955682    7172 start.go:297] selected driver: hyperv
	I0421 21:11:19.955682    7172 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-208700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-208700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.193.155 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 21:11:19.955682    7172 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 21:11:20.012702    7172 cni.go:84] Creating CNI manager for ""
	I0421 21:11:20.012784    7172 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 21:11:20.012986    7172 start.go:340] cluster config:
	{Name:kubernetes-upgrade-208700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-208700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.193.155 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 21:11:20.013333    7172 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 21:11:20.019364    7172 out.go:177] * Starting "kubernetes-upgrade-208700" primary control-plane node in "kubernetes-upgrade-208700" cluster
	I0421 21:11:20.041456    7172 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 21:11:20.042576    7172 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 21:11:20.042695    7172 cache.go:56] Caching tarball of preloaded images
	I0421 21:11:20.043051    7172 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 21:11:20.043051    7172 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 21:11:20.043051    7172 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-208700\config.json ...
	I0421 21:11:20.046570    7172 start.go:360] acquireMachinesLock for kubernetes-upgrade-208700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 21:16:05.509319    7172 start.go:364] duration metric: took 4m45.4606926s to acquireMachinesLock for "kubernetes-upgrade-208700"
	I0421 21:16:05.510012    7172 start.go:96] Skipping create...Using existing machine configuration
	I0421 21:16:05.510012    7172 fix.go:54] fixHost starting: 
	I0421 21:16:05.510786    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:07.824564    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:07.824643    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:07.824643    7172 fix.go:112] recreateIfNeeded on kubernetes-upgrade-208700: state=Running err=<nil>
	W0421 21:16:07.824742    7172 fix.go:138] unexpected machine state, will restart: <nil>
	I0421 21:16:07.828550    7172 out.go:177] * Updating the running hyperv "kubernetes-upgrade-208700" VM ...
	I0421 21:16:07.830788    7172 machine.go:94] provisionDockerMachine start ...
	I0421 21:16:07.830948    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:10.111072    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:10.111270    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:10.111363    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:12.877098    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:12.877183    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:12.883733    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:16:12.884371    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:16:12.884371    7172 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 21:16:13.034894    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-208700
	
	I0421 21:16:13.034894    7172 buildroot.go:166] provisioning hostname "kubernetes-upgrade-208700"
	I0421 21:16:13.034894    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:15.404862    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:15.405115    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:15.405200    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:18.404711    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:18.404711    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:18.412102    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:16:18.412102    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:16:18.412102    7172 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-208700 && echo "kubernetes-upgrade-208700" | sudo tee /etc/hostname
	I0421 21:16:18.618742    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-208700
	
	I0421 21:16:18.618742    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:20.856782    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:20.856846    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:20.856846    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:23.607451    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:23.607451    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:23.612637    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:16:23.613403    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:16:23.613403    7172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-208700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-208700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-208700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 21:16:23.750849    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 21:16:23.750849    7172 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 21:16:23.750849    7172 buildroot.go:174] setting up certificates
	I0421 21:16:23.750849    7172 provision.go:84] configureAuth start
	I0421 21:16:23.750849    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:25.998929    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:25.998929    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:25.999096    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:28.719767    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:28.771429    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:28.771543    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:31.019807    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:31.020558    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:31.020758    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:33.757545    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:33.757545    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:33.758022    7172 provision.go:143] copyHostCerts
	I0421 21:16:33.758565    7172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0421 21:16:33.758627    7172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0421 21:16:33.759106    7172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0421 21:16:33.760630    7172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0421 21:16:33.760761    7172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0421 21:16:33.761161    7172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0421 21:16:33.762920    7172 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0421 21:16:33.762920    7172 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0421 21:16:33.763011    7172 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0421 21:16:33.764391    7172 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-208700 san=[127.0.0.1 172.27.193.155 kubernetes-upgrade-208700 localhost minikube]
	I0421 21:16:33.875488    7172 provision.go:177] copyRemoteCerts
	I0421 21:16:33.888372    7172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0421 21:16:33.888372    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:36.107950    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:36.107950    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:36.107950    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:38.826156    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:38.826370    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:38.826403    7172 sshutil.go:53] new ssh client: &{IP:172.27.193.155 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-208700\id_rsa Username:docker}
	I0421 21:16:38.944940    7172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0565308s)
	I0421 21:16:38.945411    7172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0421 21:16:39.000583    7172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0421 21:16:39.058919    7172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0421 21:16:39.113748    7172 provision.go:87] duration metric: took 15.3627871s to configureAuth
	I0421 21:16:39.113748    7172 buildroot.go:189] setting minikube options for container-runtime
	I0421 21:16:39.114734    7172 config.go:182] Loaded profile config "kubernetes-upgrade-208700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 21:16:39.114734    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:41.313844    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:41.314097    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:41.314230    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:43.966660    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:43.966955    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:43.972710    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:16:43.973572    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:16:43.973572    7172 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0421 21:16:44.115446    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0421 21:16:44.115446    7172 buildroot.go:70] root file system type: tmpfs
	I0421 21:16:44.115446    7172 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0421 21:16:44.115987    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:46.289549    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:46.289549    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:46.289549    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:48.963879    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:48.963879    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:48.970916    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:16:48.971304    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:16:48.971304    7172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0421 21:16:49.149821    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0421 21:16:49.149821    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:51.348001    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:51.348053    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:51.348164    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:54.029820    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:54.029820    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:54.036344    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:16:54.037305    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:16:54.037453    7172 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0421 21:16:54.186580    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 21:16:54.186580    7172 machine.go:97] duration metric: took 46.3553427s to provisionDockerMachine
	I0421 21:16:54.186580    7172 start.go:293] postStartSetup for "kubernetes-upgrade-208700" (driver="hyperv")
	I0421 21:16:54.186580    7172 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0421 21:16:54.201565    7172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0421 21:16:54.201565    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:16:56.397524    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:16:56.397524    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:56.398524    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:16:59.067918    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:16:59.068196    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:16:59.069176    7172 sshutil.go:53] new ssh client: &{IP:172.27.193.155 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-208700\id_rsa Username:docker}
	I0421 21:16:59.191696    7172 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.990095s)
	I0421 21:16:59.208282    7172 ssh_runner.go:195] Run: cat /etc/os-release
	I0421 21:16:59.216630    7172 info.go:137] Remote host: Buildroot 2023.02.9
	I0421 21:16:59.216828    7172 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0421 21:16:59.217285    7172 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0421 21:16:59.217666    7172 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem -> 138002.pem in /etc/ssl/certs
	I0421 21:16:59.234250    7172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0421 21:16:59.256203    7172 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\138002.pem --> /etc/ssl/certs/138002.pem (1708 bytes)
	I0421 21:16:59.319379    7172 start.go:296] duration metric: took 5.1327612s for postStartSetup
	I0421 21:16:59.319379    7172 fix.go:56] duration metric: took 53.8089742s for fixHost
	I0421 21:16:59.319379    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:17:01.597716    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:17:01.598285    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:01.598520    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:17:04.450054    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:17:04.450999    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:04.460836    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:17:04.461520    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:17:04.461520    7172 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0421 21:17:04.605738    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713734224.611667812
	
	I0421 21:17:04.605738    7172 fix.go:216] guest clock: 1713734224.611667812
	I0421 21:17:04.605738    7172 fix.go:229] Guest: 2024-04-21 21:17:04.611667812 +0000 UTC Remote: 2024-04-21 21:16:59.3193795 +0000 UTC m=+345.908534301 (delta=5.292288312s)
	I0421 21:17:04.605863    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:17:06.920927    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:17:06.921108    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:06.921108    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:17:09.759673    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:17:09.760141    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:09.765552    7172 main.go:141] libmachine: Using SSH client type: native
	I0421 21:17:09.765775    7172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.193.155 22 <nil> <nil>}
	I0421 21:17:09.765775    7172 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1713734224
	I0421 21:17:09.928122    7172 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Apr 21 21:17:04 UTC 2024
	
	I0421 21:17:09.928236    7172 fix.go:236] clock set: Sun Apr 21 21:17:04 UTC 2024
	 (err=<nil>)
	I0421 21:17:09.928236    7172 start.go:83] releasing machines lock for "kubernetes-upgrade-208700", held for 1m4.4184467s
	I0421 21:17:09.928535    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:17:12.343843    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:17:12.343843    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:12.343843    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:17:15.311173    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:17:15.311173    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:15.317420    7172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0421 21:17:15.317420    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:17:15.334920    7172 ssh_runner.go:195] Run: cat /version.json
	I0421 21:17:15.334920    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-208700 ).state
	I0421 21:17:18.020881    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:17:18.020989    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:18.021103    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:17:18.040132    7172 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:17:18.040365    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:18.040542    7172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-208700 ).networkadapters[0]).ipaddresses[0]
	I0421 21:17:20.997787    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:17:20.998736    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:20.998736    7172 sshutil.go:53] new ssh client: &{IP:172.27.193.155 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-208700\id_rsa Username:docker}
	I0421 21:17:21.092910    7172 main.go:141] libmachine: [stdout =====>] : 172.27.193.155
	
	I0421 21:17:21.092910    7172 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:21.093737    7172 sshutil.go:53] new ssh client: &{IP:172.27.193.155 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-208700\id_rsa Username:docker}
	I0421 21:17:23.111823    7172 ssh_runner.go:235] Completed: cat /version.json: (7.776802s)
	I0421 21:17:23.111881    7172 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.7944046s)
	W0421 21:17:23.111881    7172 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0421 21:17:23.111881    7172 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0421 21:17:23.111881    7172 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0421 21:17:23.131732    7172 ssh_runner.go:195] Run: systemctl --version
	I0421 21:17:23.159389    7172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0421 21:17:23.172015    7172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0421 21:17:23.186562    7172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0421 21:17:23.225463    7172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0421 21:17:23.265956    7172 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0421 21:17:23.265956    7172 start.go:494] detecting cgroup driver to use...
	I0421 21:17:23.265956    7172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 21:17:23.324819    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0421 21:17:23.381878    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0421 21:17:23.412065    7172 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0421 21:17:23.430124    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0421 21:17:23.468129    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 21:17:23.508302    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0421 21:17:23.550210    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0421 21:17:23.594192    7172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0421 21:17:23.632603    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0421 21:17:23.673611    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0421 21:17:23.707367    7172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0421 21:17:23.745005    7172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0421 21:17:23.784309    7172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0421 21:17:23.830076    7172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 21:17:24.181489    7172 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0421 21:17:24.218390    7172 start.go:494] detecting cgroup driver to use...
	I0421 21:17:24.232336    7172 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0421 21:17:24.274052    7172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 21:17:24.319623    7172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0421 21:17:24.420347    7172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0421 21:17:24.468857    7172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0421 21:17:24.500492    7172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0421 21:17:24.568514    7172 ssh_runner.go:195] Run: which cri-dockerd
	I0421 21:17:24.597597    7172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0421 21:17:24.617821    7172 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0421 21:17:24.683775    7172 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0421 21:17:25.029841    7172 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0421 21:17:25.373894    7172 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0421 21:17:25.373894    7172 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0421 21:17:25.446475    7172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0421 21:17:25.790866    7172 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0421 21:18:37.157884    7172 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3664967s)
	I0421 21:18:37.172425    7172 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0421 21:18:37.245600    7172 out.go:177] 
	W0421 21:18:37.250023    7172 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 21 21:09:53 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.465299213Z" level=info msg="Starting up"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.467757382Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.473110531Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.523159425Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553433468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553503370Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553585472Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553604673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554176789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554285292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554516498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554618201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554737904Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554759305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.555319421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.556158744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559572839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559737444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559956550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559981051Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561047380Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561157483Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561177384Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563405646Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563518949Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563545450Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563564050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563580651Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563741355Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564100865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564282270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564344072Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564363973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564381373Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564396574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564411574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564428174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564444375Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564463375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564479376Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564494076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564517377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564533977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564550078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564633380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564707382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564733083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564747983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564770684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564787784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564807385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564822885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564837086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564851586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564869487Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564893387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564949089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564967289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565019291Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565039891Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565052892Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565065192Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565137994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565243997Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565264598Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565621508Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565793712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565844714Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565866314Z" level=info msg="containerd successfully booted in 0.045531s"
	Apr 21 21:09:54 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:54.548976168Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:09:54 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:54.702874086Z" level=info msg="Loading containers: start."
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.165862219Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.267885108Z" level=info msg="Loading containers: done."
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.305481698Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.306607265Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.371631745Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:09:55 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.372801311Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.262811593Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:10:24 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.265430686Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.267592580Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.267985478Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.268204978Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.358971200Z" level=info msg="Starting up"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.360951294Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.362072391Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1136
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.401400480Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.437704478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.437963777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438102177Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438168377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438221376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438243676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438515776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438711075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438744775Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438769275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438813675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.439179574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.442976463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443114763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443368462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443497962Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443550961Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443586561Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443611761Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444251359Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444390659Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444430959Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444465259Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444496159Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444585358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445301056Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445532956Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445715155Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445754155Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445786455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445817955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445844655Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445976355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446013254Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446083154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446120454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446149054Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446189354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446220954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446248554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446275354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446301754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446331554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446357753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446384953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446414553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446446453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446483753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446513653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446540153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446572753Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446613153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446642853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446668753Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446907752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446951252Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446982852Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447051351Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447153751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447215951Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447242151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447601050Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447803649Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447945349Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.448015249Z" level=info msg="containerd successfully booted in 0.047865s"
	Apr 21 21:10:26 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:26.774348507Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:10:27 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:27.342588803Z" level=info msg="Loading containers: start."
	Apr 21 21:10:29 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:29.686698389Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.515785449Z" level=info msg="Loading containers: done."
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.882376415Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.882562814Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.980108339Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:10:30 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.980364938Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.568042735Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:10:44 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570100432Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570423848Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570640658Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570679460Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.656506401Z" level=info msg="Starting up"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.657732655Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.659222521Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1546
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.695816332Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725429435Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725517439Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725657345Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725681546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725718548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725735349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726012661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726115265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726138066Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726153667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726206269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726384877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730071239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730179744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730372853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730458956Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730497958Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730518759Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730531360Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731022681Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731133086Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731158187Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731703311Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731818316Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.732212234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733192777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733669498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733788303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734077316Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734244823Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734481033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734632240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734700643Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734756446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734809248Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734920553Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735011457Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735121162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735161263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735176864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735191765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735207265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735222466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735236467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735251067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735266268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735289569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735304070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735318170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735353472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735372573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735396574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735411674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735424775Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735473377Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735491878Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735504779Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735516779Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735692787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735797491Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735816592Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736236011Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736327415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736396318Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736495922Z" level=info msg="containerd successfully booted in 0.043195s"
	Apr 21 21:10:46 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:46.711630608Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:10:47 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:47.716544083Z" level=info msg="Loading containers: start."
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.046036901Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.145326853Z" level=info msg="Loading containers: done."
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.172815136Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.173005243Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.224827197Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.225093106Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:48 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.562395547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.563115064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.564484896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.565834728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573097098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573186500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573206100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573458306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643629647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643693148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643706749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643820251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.679937596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680023098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680037498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680138800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194115602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194438109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194472709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.195037122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252469470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252696175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252786177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.255822243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322749499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322846201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322934503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.323056405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333189526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333505433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333920442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.334505554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271522948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271662350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271680250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.272702666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361029788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361677598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361797799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.362639512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397557235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397807538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397898940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.398496849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.069311519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.069841517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.070098316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.070235116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.110434787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.110814185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.111298684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.111754182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.270792571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271104370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271126570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271333969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.620076282Z" level=info msg="shim disconnected" id=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:14.621804377Z" level=info msg="ignoring event" container=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.623501671Z" level=warning msg="cleaning up after shim disconnected" id=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.623597671Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.832709596Z" level=info msg="shim disconnected" id=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.833054995Z" level=warning msg="cleaning up after shim disconnected" id=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.833199995Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:14.835321988Z" level=info msg="ignoring event" container=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.969061403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.969218802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.970049300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.970575398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.019908639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020055538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020091538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020222138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512321449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512800347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512949547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.513841744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767426424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767798522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767974722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.768200321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:19.768759422Z" level=info msg="ignoring event" container=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.770828312Z" level=info msg="shim disconnected" id=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.771324210Z" level=warning msg="cleaning up after shim disconnected" id=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.771343710Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:19.982661324Z" level=info msg="ignoring event" container=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.983638120Z" level=info msg="shim disconnected" id=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.984212217Z" level=warning msg="cleaning up after shim disconnected" id=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.984426116Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:31.311579526Z" level=info msg="ignoring event" container=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.313744616Z" level=info msg="shim disconnected" id=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.313920015Z" level=warning msg="cleaning up after shim disconnected" id=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.314127714Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.344230377Z" level=warning msg="cleanup warnings time=\"2024-04-21T21:11:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.809339152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810158649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810436548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810832946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726268185Z" level=info msg="shim disconnected" id=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726376685Z" level=warning msg="cleaning up after shim disconnected" id=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726408085Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:13:11.727542580Z" level=info msg="ignoring event" container=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003076568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003236667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003255167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003888664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:15:04.698949631Z" level=info msg="ignoring event" container=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701283821Z" level=info msg="shim disconnected" id=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb namespace=moby
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701641219Z" level=warning msg="cleaning up after shim disconnected" id=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb namespace=moby
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701798919Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.011443704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.011922702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.012046701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.012278600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:16:54.320028659Z" level=info msg="ignoring event" container=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.319844960Z" level=info msg="shim disconnected" id=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.321439753Z" level=warning msg="cleaning up after shim disconnected" id=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.321734452Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597085415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597412314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597436214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.599901804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:17:25 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:17:25 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:25.835894422Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.047361953Z" level=info msg="ignoring event" container=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.050146643Z" level=info msg="shim disconnected" id=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.050815240Z" level=warning msg="cleaning up after shim disconnected" id=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.054171728Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.108942329Z" level=info msg="ignoring event" container=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.112171117Z" level=info msg="shim disconnected" id=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.123838075Z" level=info msg="ignoring event" container=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129141755Z" level=warning msg="cleaning up after shim disconnected" id=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.120900885Z" level=info msg="shim disconnected" id=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129546754Z" level=warning msg="cleaning up after shim disconnected" id=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129774853Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129525854Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.176369584Z" level=info msg="ignoring event" container=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.184991452Z" level=info msg="ignoring event" container=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.185391351Z" level=info msg="shim disconnected" id=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.187216144Z" level=info msg="ignoring event" container=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.187336744Z" level=info msg="ignoring event" container=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.187727842Z" level=warning msg="cleaning up after shim disconnected" id=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.188756639Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.191626728Z" level=info msg="shim disconnected" id=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195357115Z" level=warning msg="cleaning up after shim disconnected" id=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195477814Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.193714020Z" level=info msg="shim disconnected" id=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195925212Z" level=warning msg="cleaning up after shim disconnected" id=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195939612Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.192302326Z" level=info msg="shim disconnected" id=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.201241093Z" level=warning msg="cleaning up after shim disconnected" id=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.201255393Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.241327847Z" level=info msg="ignoring event" container=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.242835942Z" level=info msg="shim disconnected" id=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.243161241Z" level=warning msg="cleaning up after shim disconnected" id=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.243306740Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.253571203Z" level=info msg="ignoring event" container=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.254492999Z" level=info msg="shim disconnected" id=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.267763951Z" level=warning msg="cleaning up after shim disconnected" id=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.268457649Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270042443Z" level=info msg="shim disconnected" id=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270856640Z" level=warning msg="cleaning up after shim disconnected" id=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270903640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281238402Z" level=info msg="shim disconnected" id=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281510601Z" level=warning msg="cleaning up after shim disconnected" id=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281627301Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.286935181Z" level=info msg="ignoring event" container=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.287431080Z" level=info msg="ignoring event" container=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.313201186Z" level=info msg="ignoring event" container=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313015486Z" level=info msg="shim disconnected" id=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313580184Z" level=warning msg="cleaning up after shim disconnected" id=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313784884Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:31.036222603Z" level=info msg="ignoring event" container=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.038782193Z" level=info msg="shim disconnected" id=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.039214992Z" level=warning msg="cleaning up after shim disconnected" id=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.039233492Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:35.885553560Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936627740Z" level=info msg="shim disconnected" id=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936811740Z" level=warning msg="cleaning up after shim disconnected" id=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936857940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:35.938677936Z" level=info msg="ignoring event" container=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.036620404Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.037737501Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.037881701Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.038482199Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Consumed 13.888s CPU time.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:17:37 kubernetes-upgrade-208700 dockerd[5606]: time="2024-04-21T21:17:37.134026441Z" level=info msg="Starting up"
	Apr 21 21:18:37 kubernetes-upgrade-208700 dockerd[5606]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 21 21:09:53 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.465299213Z" level=info msg="Starting up"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.467757382Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.473110531Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.523159425Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553433468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553503370Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553585472Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553604673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554176789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554285292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554516498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554618201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554737904Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554759305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.555319421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.556158744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559572839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559737444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559956550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559981051Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561047380Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561157483Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561177384Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563405646Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563518949Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563545450Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563564050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563580651Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563741355Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564100865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564282270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564344072Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564363973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564381373Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564396574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564411574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564428174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564444375Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564463375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564479376Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564494076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564517377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564533977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564550078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564633380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564707382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564733083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564747983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564770684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564787784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564807385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564822885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564837086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564851586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564869487Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564893387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564949089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564967289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565019291Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565039891Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565052892Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565065192Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565137994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565243997Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565264598Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565621508Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565793712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565844714Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565866314Z" level=info msg="containerd successfully booted in 0.045531s"
	Apr 21 21:09:54 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:54.548976168Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:09:54 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:54.702874086Z" level=info msg="Loading containers: start."
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.165862219Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.267885108Z" level=info msg="Loading containers: done."
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.305481698Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.306607265Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.371631745Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:09:55 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.372801311Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.262811593Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:10:24 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.265430686Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.267592580Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.267985478Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.268204978Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.358971200Z" level=info msg="Starting up"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.360951294Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.362072391Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1136
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.401400480Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.437704478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.437963777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438102177Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438168377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438221376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438243676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438515776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438711075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438744775Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438769275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438813675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.439179574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.442976463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443114763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443368462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443497962Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443550961Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443586561Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443611761Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444251359Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444390659Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444430959Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444465259Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444496159Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444585358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445301056Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445532956Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445715155Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445754155Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445786455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445817955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445844655Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445976355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446013254Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446083154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446120454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446149054Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446189354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446220954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446248554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446275354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446301754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446331554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446357753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446384953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446414553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446446453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446483753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446513653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446540153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446572753Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446613153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446642853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446668753Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446907752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446951252Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446982852Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447051351Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447153751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447215951Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447242151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447601050Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447803649Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447945349Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.448015249Z" level=info msg="containerd successfully booted in 0.047865s"
	Apr 21 21:10:26 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:26.774348507Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:10:27 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:27.342588803Z" level=info msg="Loading containers: start."
	Apr 21 21:10:29 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:29.686698389Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.515785449Z" level=info msg="Loading containers: done."
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.882376415Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.882562814Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.980108339Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:10:30 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.980364938Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.568042735Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:10:44 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570100432Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570423848Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570640658Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570679460Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.656506401Z" level=info msg="Starting up"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.657732655Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.659222521Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1546
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.695816332Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725429435Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725517439Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725657345Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725681546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725718548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725735349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726012661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726115265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726138066Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726153667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726206269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726384877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730071239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730179744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730372853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730458956Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730497958Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730518759Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730531360Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731022681Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731133086Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731158187Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731703311Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731818316Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.732212234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733192777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733669498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733788303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734077316Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734244823Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734481033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734632240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734700643Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734756446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734809248Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734920553Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735011457Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735121162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735161263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735176864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735191765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735207265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735222466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735236467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735251067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735266268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735289569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735304070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735318170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735353472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735372573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735396574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735411674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735424775Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735473377Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735491878Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735504779Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735516779Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735692787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735797491Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735816592Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736236011Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736327415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736396318Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736495922Z" level=info msg="containerd successfully booted in 0.043195s"
	Apr 21 21:10:46 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:46.711630608Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:10:47 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:47.716544083Z" level=info msg="Loading containers: start."
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.046036901Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.145326853Z" level=info msg="Loading containers: done."
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.172815136Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.173005243Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.224827197Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.225093106Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:48 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.562395547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.563115064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.564484896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.565834728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573097098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573186500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573206100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573458306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643629647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643693148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643706749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643820251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.679937596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680023098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680037498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680138800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194115602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194438109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194472709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.195037122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252469470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252696175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252786177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.255822243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322749499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322846201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322934503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.323056405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333189526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333505433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333920442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.334505554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271522948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271662350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271680250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.272702666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361029788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361677598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361797799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.362639512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397557235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397807538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397898940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.398496849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.069311519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.069841517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.070098316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.070235116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.110434787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.110814185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.111298684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.111754182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.270792571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271104370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271126570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271333969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.620076282Z" level=info msg="shim disconnected" id=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:14.621804377Z" level=info msg="ignoring event" container=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.623501671Z" level=warning msg="cleaning up after shim disconnected" id=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.623597671Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.832709596Z" level=info msg="shim disconnected" id=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.833054995Z" level=warning msg="cleaning up after shim disconnected" id=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.833199995Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:14.835321988Z" level=info msg="ignoring event" container=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.969061403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.969218802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.970049300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.970575398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.019908639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020055538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020091538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020222138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512321449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512800347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512949547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.513841744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767426424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767798522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767974722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.768200321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:19.768759422Z" level=info msg="ignoring event" container=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.770828312Z" level=info msg="shim disconnected" id=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.771324210Z" level=warning msg="cleaning up after shim disconnected" id=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.771343710Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:19.982661324Z" level=info msg="ignoring event" container=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.983638120Z" level=info msg="shim disconnected" id=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.984212217Z" level=warning msg="cleaning up after shim disconnected" id=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.984426116Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:31.311579526Z" level=info msg="ignoring event" container=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.313744616Z" level=info msg="shim disconnected" id=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.313920015Z" level=warning msg="cleaning up after shim disconnected" id=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.314127714Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.344230377Z" level=warning msg="cleanup warnings time=\"2024-04-21T21:11:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.809339152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810158649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810436548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810832946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726268185Z" level=info msg="shim disconnected" id=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726376685Z" level=warning msg="cleaning up after shim disconnected" id=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726408085Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:13:11.727542580Z" level=info msg="ignoring event" container=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003076568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003236667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003255167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003888664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:15:04.698949631Z" level=info msg="ignoring event" container=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701283821Z" level=info msg="shim disconnected" id=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb namespace=moby
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701641219Z" level=warning msg="cleaning up after shim disconnected" id=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb namespace=moby
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701798919Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.011443704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.011922702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.012046701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.012278600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:16:54.320028659Z" level=info msg="ignoring event" container=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.319844960Z" level=info msg="shim disconnected" id=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.321439753Z" level=warning msg="cleaning up after shim disconnected" id=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.321734452Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597085415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597412314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597436214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.599901804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:17:25 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:17:25 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:25.835894422Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.047361953Z" level=info msg="ignoring event" container=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.050146643Z" level=info msg="shim disconnected" id=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.050815240Z" level=warning msg="cleaning up after shim disconnected" id=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.054171728Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.108942329Z" level=info msg="ignoring event" container=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.112171117Z" level=info msg="shim disconnected" id=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.123838075Z" level=info msg="ignoring event" container=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129141755Z" level=warning msg="cleaning up after shim disconnected" id=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.120900885Z" level=info msg="shim disconnected" id=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129546754Z" level=warning msg="cleaning up after shim disconnected" id=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129774853Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129525854Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.176369584Z" level=info msg="ignoring event" container=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.184991452Z" level=info msg="ignoring event" container=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.185391351Z" level=info msg="shim disconnected" id=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.187216144Z" level=info msg="ignoring event" container=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.187336744Z" level=info msg="ignoring event" container=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.187727842Z" level=warning msg="cleaning up after shim disconnected" id=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.188756639Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.191626728Z" level=info msg="shim disconnected" id=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195357115Z" level=warning msg="cleaning up after shim disconnected" id=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195477814Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.193714020Z" level=info msg="shim disconnected" id=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195925212Z" level=warning msg="cleaning up after shim disconnected" id=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195939612Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.192302326Z" level=info msg="shim disconnected" id=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.201241093Z" level=warning msg="cleaning up after shim disconnected" id=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.201255393Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.241327847Z" level=info msg="ignoring event" container=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.242835942Z" level=info msg="shim disconnected" id=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.243161241Z" level=warning msg="cleaning up after shim disconnected" id=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.243306740Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.253571203Z" level=info msg="ignoring event" container=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.254492999Z" level=info msg="shim disconnected" id=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.267763951Z" level=warning msg="cleaning up after shim disconnected" id=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.268457649Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270042443Z" level=info msg="shim disconnected" id=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270856640Z" level=warning msg="cleaning up after shim disconnected" id=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270903640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281238402Z" level=info msg="shim disconnected" id=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281510601Z" level=warning msg="cleaning up after shim disconnected" id=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281627301Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.286935181Z" level=info msg="ignoring event" container=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.287431080Z" level=info msg="ignoring event" container=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.313201186Z" level=info msg="ignoring event" container=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313015486Z" level=info msg="shim disconnected" id=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313580184Z" level=warning msg="cleaning up after shim disconnected" id=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313784884Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:31.036222603Z" level=info msg="ignoring event" container=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.038782193Z" level=info msg="shim disconnected" id=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.039214992Z" level=warning msg="cleaning up after shim disconnected" id=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.039233492Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:35.885553560Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936627740Z" level=info msg="shim disconnected" id=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936811740Z" level=warning msg="cleaning up after shim disconnected" id=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936857940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:35.938677936Z" level=info msg="ignoring event" container=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.036620404Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.037737501Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.037881701Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.038482199Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Consumed 13.888s CPU time.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:17:37 kubernetes-upgrade-208700 dockerd[5606]: time="2024-04-21T21:17:37.134026441Z" level=info msg="Starting up"
	Apr 21 21:18:37 kubernetes-upgrade-208700 dockerd[5606]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0421 21:18:37.250710    7172 out.go:239] * 
	W0421 21:18:37.252222    7172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 21:18:37.256909    7172 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-208700 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-21 21:18:37.7147576 +0000 UTC m=+10530.070326101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-208700 -n kubernetes-upgrade-208700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-208700 -n kubernetes-upgrade-208700: exit status 2 (12.5441378s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 21:18:37.858264   10204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-208700 logs -n 25
E0421 21:20:36.953319   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-208700 logs -n 25: (2m48.0479007s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-149100          | force-systemd-flag-149100 | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:56 UTC | 21 Apr 24 20:57 UTC |
	| start   | -p cert-expiration-104900             | cert-expiration-104900    | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:57 UTC | 21 Apr 24 21:05 UTC |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-043400             | running-upgrade-043400    | minikube6\jenkins | v1.33.0 | 21 Apr 24 20:58 UTC | 21 Apr 24 21:07 UTC |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-214100              | force-systemd-env-214100  | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:01 UTC | 21 Apr 24 21:01 UTC |
	|         | ssh docker info --format              |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-214100           | force-systemd-env-214100  | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:01 UTC | 21 Apr 24 21:02 UTC |
	| start   | -p docker-flags-064200                | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:02 UTC | 21 Apr 24 21:09 UTC |
	|         | --cache-images=false                  |                           |                   |         |                     |                     |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=false                          |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                    |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-208700          | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:03 UTC | 21 Apr 24 21:04 UTC |
	| start   | -p kubernetes-upgrade-208700          | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:04 UTC | 21 Apr 24 21:11 UTC |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-043400             | running-upgrade-043400    | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:07 UTC | 21 Apr 24 21:09 UTC |
	| start   | -p cert-expiration-104900             | cert-expiration-104900    | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:08 UTC |                     |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p cert-options-338400                | cert-options-338400       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC | 21 Apr 24 21:14 UTC |
	|         | --memory=2048                         |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | docker-flags-064200 ssh               | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC | 21 Apr 24 21:09 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=Environment                |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| ssh     | docker-flags-064200 ssh               | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC | 21 Apr 24 21:09 UTC |
	|         | sudo systemctl show docker            |                           |                   |         |                     |                     |
	|         | --property=ExecStart                  |                           |                   |         |                     |                     |
	|         | --no-pager                            |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-064200                | docker-flags-064200       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:09 UTC | 21 Apr 24 21:10 UTC |
	| start   | -p stopped-upgrade-603200             | minikube                  | minikube6\jenkins | v1.26.0 | 21 Apr 24 21:10 GMT | 21 Apr 24 21:17 GMT |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv                    |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-208700          | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:11 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-208700          | kubernetes-upgrade-208700 | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:11 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| ssh     | cert-options-338400 ssh               | cert-options-338400       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:14 UTC | 21 Apr 24 21:14 UTC |
	|         | openssl x509 -text -noout -in         |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-338400 -- sudo        | cert-options-338400       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:14 UTC | 21 Apr 24 21:15 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |                   |         |                     |                     |
	| delete  | -p cert-expiration-104900             | cert-expiration-104900    | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:14 UTC | 21 Apr 24 21:16 UTC |
	| delete  | -p cert-options-338400                | cert-options-338400       | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:15 UTC | 21 Apr 24 21:15 UTC |
	| start   | -p pause-341900 --memory=2048         | pause-341900              | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:15 UTC |                     |
	|         | --install-addons=false                |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv            |                           |                   |         |                     |                     |
	| start   | -p auto-190300 --memory=3072          | auto-190300               | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:16 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-603200 stop           | minikube                  | minikube6\jenkins | v1.26.0 | 21 Apr 24 21:17 GMT | 21 Apr 24 21:17 GMT |
	| start   | -p stopped-upgrade-603200             | stopped-upgrade-603200    | minikube6\jenkins | v1.33.0 | 21 Apr 24 21:17 UTC |                     |
	|         | --memory=2200                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                       |                           |                   |         |                     |                     |
	|---------|---------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 21:17:54
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 21:17:54.741313    6892 out.go:291] Setting OutFile to fd 1780 ...
	I0421 21:17:54.742274    6892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 21:17:54.742274    6892 out.go:304] Setting ErrFile to fd 916...
	I0421 21:17:54.742274    6892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 21:17:54.768977    6892 out.go:298] Setting JSON to false
	I0421 21:17:54.775980    6892 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20149,"bootTime":1713714124,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 21:17:54.775980    6892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 21:17:54.963096    6892 out.go:177] * [stopped-upgrade-603200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 21:17:50.994910   13880 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:17:50.994910   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:50.994910   13880 main.go:141] libmachine: Starting VM...
	I0421 21:17:50.995238   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM pause-341900
	I0421 21:17:54.979868    6892 notify.go:220] Checking for updates...
	I0421 21:17:55.138051    6892 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 21:17:55.325686    6892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 21:17:55.713218    6892 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 21:17:55.900512    6892 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 21:17:56.030063    6892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 21:17:56.071191    6892 config.go:182] Loaded profile config "stopped-upgrade-603200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0421 21:17:56.185698    6892 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0421 21:17:56.286100    6892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 21:17:58.175366   13880 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:17:58.175366   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:17:58.175366   13880 main.go:141] libmachine: Waiting for host to start...
	I0421 21:17:58.175441   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:01.943308    6892 out.go:177] * Using the hyperv driver based on existing profile
	I0421 21:18:01.948301    6892 start.go:297] selected driver: hyperv
	I0421 21:18:01.948882    6892 start.go:901] validating driver "hyperv" against &{Name:stopped-upgrade-603200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-603200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.202.117 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0421 21:18:01.949104    6892 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0421 21:18:02.015753    6892 cni.go:84] Creating CNI manager for ""
	I0421 21:18:02.016678    6892 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 21:18:02.016678    6892 start.go:340] cluster config:
	{Name:stopped-upgrade-603200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-603200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.27.202.117 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0421 21:18:02.016678    6892 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 21:18:02.164760    6892 out.go:177] * Starting "stopped-upgrade-603200" primary control-plane node in "stopped-upgrade-603200" cluster
	I0421 21:18:02.218244    6892 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0421 21:18:02.219392    6892 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
	I0421 21:18:02.219392    6892 cache.go:56] Caching tarball of preloaded images
	I0421 21:18:02.219392    6892 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0421 21:18:02.219938    6892 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0421 21:18:02.220239    6892 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\stopped-upgrade-603200\config.json ...
	I0421 21:18:02.223753    6892 start.go:360] acquireMachinesLock for stopped-upgrade-603200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0421 21:18:00.882675   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:00.882675   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:00.882736   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:03.508161   13880 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:18:03.508161   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:04.515404   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:06.721138   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:06.721138   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:06.721138   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:09.284464   13880 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:18:09.284464   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:10.298939   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:12.515302   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:12.515302   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:12.515527   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:15.123322   13880 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:18:15.123322   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:16.133842   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:18.363587   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:18.363773   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:18.363773   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:20.956080   13880 main.go:141] libmachine: [stdout =====>] : 
	I0421 21:18:20.956080   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:21.963941   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:24.209435   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:24.209435   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:24.210207   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:26.982094   13880 main.go:141] libmachine: [stdout =====>] : 172.27.200.109
	
	I0421 21:18:26.982304   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:26.982304   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:29.170122   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:29.170122   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:29.170845   13880 machine.go:94] provisionDockerMachine start ...
	I0421 21:18:29.170931   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:31.395009   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:31.395284   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:31.395384   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:34.031570   13880 main.go:141] libmachine: [stdout =====>] : 172.27.200.109
	
	I0421 21:18:34.031570   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:34.039450   13880 main.go:141] libmachine: Using SSH client type: native
	I0421 21:18:34.039450   13880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.200.109 22 <nil> <nil>}
	I0421 21:18:34.039983   13880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0421 21:18:34.181529   13880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0421 21:18:34.181529   13880 buildroot.go:166] provisioning hostname "pause-341900"
	I0421 21:18:34.181529   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:37.157884    7172 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3664967s)
	I0421 21:18:37.172425    7172 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0421 21:18:37.245600    7172 out.go:177] 
	W0421 21:18:37.250023    7172 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 21 21:09:53 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.465299213Z" level=info msg="Starting up"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.467757382Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:53.473110531Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.523159425Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553433468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553503370Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553585472Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.553604673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554176789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554285292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554516498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554618201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554737904Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.554759305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.555319421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.556158744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559572839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559737444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559956550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.559981051Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561047380Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561157483Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.561177384Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563405646Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563518949Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563545450Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563564050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563580651Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.563741355Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564100865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564282270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564344072Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564363973Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564381373Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564396574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564411574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564428174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564444375Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564463375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564479376Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564494076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564517377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564533977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564550078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564633380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564707382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564733083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564747983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564770684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564787784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564807385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564822885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564837086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564851586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564869487Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564893387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564949089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.564967289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565019291Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565039891Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565052892Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565065192Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565137994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565243997Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565264598Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565621508Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565793712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565844714Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:09:53 kubernetes-upgrade-208700 dockerd[658]: time="2024-04-21T21:09:53.565866314Z" level=info msg="containerd successfully booted in 0.045531s"
	Apr 21 21:09:54 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:54.548976168Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:09:54 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:54.702874086Z" level=info msg="Loading containers: start."
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.165862219Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.267885108Z" level=info msg="Loading containers: done."
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.305481698Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.306607265Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.371631745Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:09:55 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:09:55 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:09:55.372801311Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.262811593Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:10:24 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.265430686Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.267592580Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.267985478Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:10:24 kubernetes-upgrade-208700 dockerd[652]: time="2024-04-21T21:10:24.268204978Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:10:25 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.358971200Z" level=info msg="Starting up"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.360951294Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:25.362072391Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1136
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.401400480Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.437704478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.437963777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438102177Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438168377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438221376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438243676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438515776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438711075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438744775Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438769275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.438813675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.439179574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.442976463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443114763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443368462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443497962Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443550961Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443586561Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.443611761Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444251359Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444390659Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444430959Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444465259Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444496159Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.444585358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445301056Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445532956Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445715155Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445754155Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445786455Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445817955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445844655Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.445976355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446013254Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446083154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446120454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446149054Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446189354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446220954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446248554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446275354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446301754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446331554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446357753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446384953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446414553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446446453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446483753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446513653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446540153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446572753Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446613153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446642853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446668753Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446907752Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446951252Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.446982852Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447051351Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447153751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447215951Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447242151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447601050Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447803649Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.447945349Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:10:25 kubernetes-upgrade-208700 dockerd[1136]: time="2024-04-21T21:10:25.448015249Z" level=info msg="containerd successfully booted in 0.047865s"
	Apr 21 21:10:26 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:26.774348507Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:10:27 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:27.342588803Z" level=info msg="Loading containers: start."
	Apr 21 21:10:29 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:29.686698389Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.515785449Z" level=info msg="Loading containers: done."
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.882376415Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.882562814Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.980108339Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:10:30 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:10:30 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:30.980364938Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.568042735Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:10:44 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570100432Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570423848Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570640658Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:10:44 kubernetes-upgrade-208700 dockerd[1129]: time="2024-04-21T21:10:44.570679460Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:10:45 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.656506401Z" level=info msg="Starting up"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.657732655Z" level=info msg="containerd not running, starting managed containerd"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:45.659222521Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1546
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.695816332Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725429435Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725517439Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725657345Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725681546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725718548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.725735349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726012661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726115265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726138066Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726153667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726206269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.726384877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730071239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730179744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730372853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730458956Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730497958Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730518759Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.730531360Z" level=info msg="metadata content store policy set" policy=shared
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731022681Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731133086Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731158187Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731703311Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.731818316Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.732212234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733192777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733669498Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.733788303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734077316Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734244823Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734481033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734632240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734700643Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734756446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734809248Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.734920553Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735011457Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735121162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735161263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735176864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735191765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735207265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735222466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735236467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735251067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735266268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735289569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735304070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735318170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735353472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735372573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735396574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735411674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735424775Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735473377Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735491878Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735504779Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735516779Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735692787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735797491Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.735816592Z" level=info msg="NRI interface is disabled by configuration."
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736236011Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736327415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736396318Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 21 21:10:45 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:45.736495922Z" level=info msg="containerd successfully booted in 0.043195s"
	Apr 21 21:10:46 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:46.711630608Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 21 21:10:47 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:47.716544083Z" level=info msg="Loading containers: start."
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.046036901Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.145326853Z" level=info msg="Loading containers: done."
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.172815136Z" level=info msg="Docker daemon" commit=60b9add7 containerd-snapshotter=false storage-driver=overlay2 version=26.0.1
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.173005243Z" level=info msg="Daemon has completed initialization"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.224827197Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 21 21:10:48 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:10:48.225093106Z" level=info msg="API listen on [::]:2376"
	Apr 21 21:10:48 kubernetes-upgrade-208700 systemd[1]: Started Docker Application Container Engine.
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.562395547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.563115064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.564484896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.565834728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573097098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573186500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573206100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.573458306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643629647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643693148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643706749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.643820251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.679937596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680023098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680037498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:54.680138800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194115602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194438109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.194472709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.195037122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252469470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252696175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.252786177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.255822243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322749499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322846201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.322934503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.323056405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333189526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333505433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.333920442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:10:55 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:10:55.334505554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271522948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271662350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.271680250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.272702666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361029788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361677598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.361797799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.362639512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397557235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397807538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.397898940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:00 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:00.398496849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.069311519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.069841517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.070098316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.070235116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.110434787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.110814185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.111298684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.111754182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.270792571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271104370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271126570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:01 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:01.271333969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.620076282Z" level=info msg="shim disconnected" id=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:14.621804377Z" level=info msg="ignoring event" container=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.623501671Z" level=warning msg="cleaning up after shim disconnected" id=569243f0f22a1fded1a96393fc320c26284c9c5ad39fc1d22b2504b664de903e namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.623597671Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.832709596Z" level=info msg="shim disconnected" id=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.833054995Z" level=warning msg="cleaning up after shim disconnected" id=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:14.833199995Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:14 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:14.835321988Z" level=info msg="ignoring event" container=5df0a46e0269cc79093bdfc8c06f4cbb6ee2cdd3e883d0d452446c9593f34655 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.969061403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.969218802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.970049300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:16 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:16.970575398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.019908639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020055538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020091538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.020222138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512321449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512800347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.512949547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.513841744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767426424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767798522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.767974722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:17 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:17.768200321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:19.768759422Z" level=info msg="ignoring event" container=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.770828312Z" level=info msg="shim disconnected" id=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.771324210Z" level=warning msg="cleaning up after shim disconnected" id=93b6f827bfa4f0c4618afc3475711f5d89b8f71675a35f54d45fdcb9ea31a6bb namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.771343710Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:19.982661324Z" level=info msg="ignoring event" container=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.983638120Z" level=info msg="shim disconnected" id=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.984212217Z" level=warning msg="cleaning up after shim disconnected" id=a929511ce5744eebe00fab8b049a23cf6796bd76c86738dcceca26db028c50dc namespace=moby
	Apr 21 21:11:19 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:19.984426116Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:11:31.311579526Z" level=info msg="ignoring event" container=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.313744616Z" level=info msg="shim disconnected" id=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.313920015Z" level=warning msg="cleaning up after shim disconnected" id=9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28 namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.314127714Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:11:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:31.344230377Z" level=warning msg="cleanup warnings time=\"2024-04-21T21:11:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.809339152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810158649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810436548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:11:43 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:11:43.810832946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726268185Z" level=info msg="shim disconnected" id=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726376685Z" level=warning msg="cleaning up after shim disconnected" id=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:11.726408085Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:13:11 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:13:11.727542580Z" level=info msg="ignoring event" container=6fecbf33249e06d2d9ddd80a8f3d20e06e2827369f6132d7226f3fbf685414dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003076568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003236667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003255167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:13:12 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:13:12.003888664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:15:04.698949631Z" level=info msg="ignoring event" container=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701283821Z" level=info msg="shim disconnected" id=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb namespace=moby
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701641219Z" level=warning msg="cleaning up after shim disconnected" id=bb670d53be4cd430255008011d4ee07786c6af6c323a76b5f0310317e810a3cb namespace=moby
	Apr 21 21:15:04 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:04.701798919Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.011443704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.011922702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.012046701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:15:05 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:15:05.012278600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:16:54.320028659Z" level=info msg="ignoring event" container=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.319844960Z" level=info msg="shim disconnected" id=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.321439753Z" level=warning msg="cleaning up after shim disconnected" id=1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7 namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.321734452Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597085415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597412314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.597436214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:16:54 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:16:54.599901804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 21 21:17:25 kubernetes-upgrade-208700 systemd[1]: Stopping Docker Application Container Engine...
	Apr 21 21:17:25 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:25.835894422Z" level=info msg="Processing signal 'terminated'"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.047361953Z" level=info msg="ignoring event" container=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.050146643Z" level=info msg="shim disconnected" id=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.050815240Z" level=warning msg="cleaning up after shim disconnected" id=1494be5a5eaa18aa0c1c574438c3d2c407a8b0792737ccef9e9dfe7279411782 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.054171728Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.108942329Z" level=info msg="ignoring event" container=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.112171117Z" level=info msg="shim disconnected" id=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.123838075Z" level=info msg="ignoring event" container=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129141755Z" level=warning msg="cleaning up after shim disconnected" id=0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.120900885Z" level=info msg="shim disconnected" id=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129546754Z" level=warning msg="cleaning up after shim disconnected" id=22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129774853Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.129525854Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.176369584Z" level=info msg="ignoring event" container=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.184991452Z" level=info msg="ignoring event" container=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.185391351Z" level=info msg="shim disconnected" id=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.187216144Z" level=info msg="ignoring event" container=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.187336744Z" level=info msg="ignoring event" container=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.187727842Z" level=warning msg="cleaning up after shim disconnected" id=a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.188756639Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.191626728Z" level=info msg="shim disconnected" id=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195357115Z" level=warning msg="cleaning up after shim disconnected" id=74d3fd41f91df4d2a73df7bdc8a8af896d1a77bfa8d947a16f7a9100d0f2d27d namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195477814Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.193714020Z" level=info msg="shim disconnected" id=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195925212Z" level=warning msg="cleaning up after shim disconnected" id=c55852f348f19111c2f002dbadc03a0d64b7e4be82496304f940935049e075d3 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.195939612Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.192302326Z" level=info msg="shim disconnected" id=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.201241093Z" level=warning msg="cleaning up after shim disconnected" id=8dd084b1540f176bbd8364b981fa22cfe93cb5f307428e64e053145b888b2493 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.201255393Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.241327847Z" level=info msg="ignoring event" container=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.242835942Z" level=info msg="shim disconnected" id=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.243161241Z" level=warning msg="cleaning up after shim disconnected" id=95b81f8eec635a450a38c7856c45b762a8055c7b53a08bae1a7ab3b45f0d510a namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.243306740Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.253571203Z" level=info msg="ignoring event" container=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.254492999Z" level=info msg="shim disconnected" id=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.267763951Z" level=warning msg="cleaning up after shim disconnected" id=7029c641f658e4d277ff272ab2dffdd793984eba9e05f9a5a8151400d93c9b96 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.268457649Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270042443Z" level=info msg="shim disconnected" id=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270856640Z" level=warning msg="cleaning up after shim disconnected" id=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.270903640Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281238402Z" level=info msg="shim disconnected" id=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281510601Z" level=warning msg="cleaning up after shim disconnected" id=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.281627301Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.286935181Z" level=info msg="ignoring event" container=84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.287431080Z" level=info msg="ignoring event" container=b745f48a68d5af5f44e3e3c12268c4766a281ece07a365c3c31e3dfc5e5fa331 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:26.313201186Z" level=info msg="ignoring event" container=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313015486Z" level=info msg="shim disconnected" id=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313580184Z" level=warning msg="cleaning up after shim disconnected" id=347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4 namespace=moby
	Apr 21 21:17:26 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:26.313784884Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:31.036222603Z" level=info msg="ignoring event" container=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.038782193Z" level=info msg="shim disconnected" id=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.039214992Z" level=warning msg="cleaning up after shim disconnected" id=583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0 namespace=moby
	Apr 21 21:17:31 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:31.039233492Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:35.885553560Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936627740Z" level=info msg="shim disconnected" id=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936811740Z" level=warning msg="cleaning up after shim disconnected" id=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1546]: time="2024-04-21T21:17:35.936857940Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 21 21:17:35 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:35.938677936Z" level=info msg="ignoring event" container=cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.036620404Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.037737501Z" level=info msg="Daemon shutdown complete"
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.037881701Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 21 21:17:36 kubernetes-upgrade-208700 dockerd[1539]: time="2024-04-21T21:17:36.038482199Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Deactivated successfully.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Consumed 13.888s CPU time.
	Apr 21 21:17:37 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	Apr 21 21:17:37 kubernetes-upgrade-208700 dockerd[5606]: time="2024-04-21T21:17:37.134026441Z" level=info msg="Starting up"
	Apr 21 21:18:37 kubernetes-upgrade-208700 dockerd[5606]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:18:37 kubernetes-upgrade-208700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0421 21:18:37.250710    7172 out.go:239] * 
	W0421 21:18:37.252222    7172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0421 21:18:37.256909    7172 out.go:177] 
	I0421 21:18:36.418626   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:36.418626   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:36.418626   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:39.150188   13880 main.go:141] libmachine: [stdout =====>] : 172.27.200.109
	
	I0421 21:18:39.150188   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:39.155308   13880 main.go:141] libmachine: Using SSH client type: native
	I0421 21:18:39.155958   13880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.200.109 22 <nil> <nil>}
	I0421 21:18:39.155958   13880 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-341900 && echo "pause-341900" | sudo tee /etc/hostname
	I0421 21:18:39.323132   13880 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-341900
	
	I0421 21:18:39.323132   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:41.550889   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:41.550889   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:41.551774   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:44.237109   13880 main.go:141] libmachine: [stdout =====>] : 172.27.200.109
	
	I0421 21:18:44.237109   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:44.244673   13880 main.go:141] libmachine: Using SSH client type: native
	I0421 21:18:44.245329   13880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ca1c0] 0x13ccda0 <nil>  [] 0s} 172.27.200.109 22 <nil> <nil>}
	I0421 21:18:44.245329   13880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-341900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-341900/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-341900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0421 21:18:44.408197   13880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0421 21:18:44.408197   13880 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0421 21:18:44.408197   13880 buildroot.go:174] setting up certificates
	I0421 21:18:44.408197   13880 provision.go:84] configureAuth start
	I0421 21:18:44.408197   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	I0421 21:18:46.643967   13880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 21:18:46.643967   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:46.644257   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-341900 ).networkadapters[0]).ipaddresses[0]
	I0421 21:18:49.360300   13880 main.go:141] libmachine: [stdout =====>] : 172.27.200.109
	
	I0421 21:18:49.360300   13880 main.go:141] libmachine: [stderr =====>] : 
	I0421 21:18:49.360300   13880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-341900 ).state
	
	
	==> Docker <==
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID '347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '347c375793b57510c89ac434eb480fbfe2270839d7d99ee75095aa3dea5fe8d4'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID '84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '84a826857839c582016943806040d9ee7fede23ace370c235c3d3c2b31c33e10'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID 'cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cca898a5579f0bbf9be4f63c244d197fe33f7ba29c15bf673fbdbce347727709'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID '0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0feb2d4b220e894703882d126d7ff5d8bc2664fbf808cf881ba7ce5f7f9f6971'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID '9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9683646c9850eb02a896dcce277b2c8892870be20ef09eabc087c21653a7ce28'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID '22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '22767de8763c3d8e86a4720d831c0e9f3146f687073c41f1d85dec71b78f5a59'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID '1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1f4eab815a5880c60bf0513ebcbf8b32e55583f913d23a8ddcc141da0f499dc7'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID '583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '583a5e448381e3f02f3f37766b6cc1ff016bb723f0da780bf0b12806d75e6bf0'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 21 21:20:37 kubernetes-upgrade-208700 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="error getting RW layer size for container ID 'a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:20:37 kubernetes-upgrade-208700 cri-dockerd[1350]: time="2024-04-21T21:20:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a83660ca2c9db7c7e2880c7d6faa68fc1e8f7eb938eabdd3f9605fb16540bb00'"
	Apr 21 21:20:37 kubernetes-upgrade-208700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 21 21:20:37 kubernetes-upgrade-208700 systemd[1]: Stopped Docker Application Container Engine.
	Apr 21 21:20:37 kubernetes-upgrade-208700 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-21T21:20:39Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr21 21:10] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.116434] kauditd_printk_skb: 73 callbacks suppressed
	[  +1.278801] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	[  +0.248132] systemd-fstab-generator[1107]: Ignoring "noauto" option for root device
	[  +0.280412] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +4.683696] kauditd_printk_skb: 78 callbacks suppressed
	[  +2.415102] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.249097] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.263849] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.372053] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[ +12.264170] systemd-fstab-generator[1531]: Ignoring "noauto" option for root device
	[  +0.126155] kauditd_printk_skb: 117 callbacks suppressed
	[  +4.230590] systemd-fstab-generator[1759]: Ignoring "noauto" option for root device
	[  +4.429579] systemd-fstab-generator[1921]: Ignoring "noauto" option for root device
	[  +0.113431] kauditd_printk_skb: 73 callbacks suppressed
	[Apr21 21:11] kauditd_printk_skb: 62 callbacks suppressed
	[  +2.013142] systemd-fstab-generator[2742]: Ignoring "noauto" option for root device
	[ +12.230217] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.147999] kauditd_printk_skb: 40 callbacks suppressed
	[Apr21 21:13] hrtimer: interrupt took 1817092 ns
	[Apr21 21:17] systemd-fstab-generator[5139]: Ignoring "noauto" option for root device
	[  +0.859260] systemd-fstab-generator[5177]: Ignoring "noauto" option for root device
	[  +0.325003] systemd-fstab-generator[5189]: Ignoring "noauto" option for root device
	[  +0.433110] systemd-fstab-generator[5203]: Ignoring "noauto" option for root device
	[  +5.449096] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 21:21:38 up 12 min,  0 users,  load average: 0.00, 0.15, 0.15
	Linux kubernetes-upgrade-208700 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 21 21:21:32 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:32.454746    1928 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-208700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-208700?timeout=10s\": dial tcp 172.27.193.155:8443: connect: connection refused"
	Apr 21 21:21:32 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:32.456186    1928 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-208700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-208700?timeout=10s\": dial tcp 172.27.193.155:8443: connect: connection refused"
	Apr 21 21:21:32 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:32.456355    1928 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Apr 21 21:21:33 kubernetes-upgrade-208700 kubelet[1928]: I0421 21:21:33.566672    1928 status_manager.go:853] "Failed to get status for pod" podUID="83f99bb629eb08619c47a36321944f56" pod="kube-system/kube-apiserver-kubernetes-upgrade-208700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-208700\": dial tcp 172.27.193.155:8443: connect: connection refused"
	Apr 21 21:21:35 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:35.913529    1928 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m10.106352298s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.906137    1928 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-upgrade-208700.17c86837a1a94f7a\": dial tcp 172.27.193.155:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-208700.17c86837a1a94f7a  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-208700,UID:83f99bb629eb08619c47a36321944f56,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.27.193.155:8443/readyz\": dial tcp 172.27.193.155:8443: connect: connection refused,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-208700,},FirstTimestamp:2024-04-21 21:17:26.359191418 +0000 UTC m=+393.110060606,LastTimestamp:2024-04-21 21:17:28.355572455 +0000 UTC m=+395.106441743,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-208700,}"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.972976    1928 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.973317    1928 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.973547    1928 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.974365    1928 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.974451    1928 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.974501    1928 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.975091    1928 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.975126    1928 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.975152    1928 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.975174    1928 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: I0421 21:21:37.975236    1928 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.975439    1928 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.975686    1928 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: I0421 21:21:37.975710    1928 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.974587    1928 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.978009    1928 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.978279    1928 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.978333    1928 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 21 21:21:37 kubernetes-upgrade-208700 kubelet[1928]: E0421 21:21:37.978753    1928 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0421 21:18:50.405581    1640 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0421 21:19:37.424473    1640 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:19:37.458733    1640 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:19:37.498252    1640 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:19:37.539251    1640 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:20:37.695753    1640 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:20:37.733636    1640 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:20:37.776840    1640 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0421 21:20:37.816490    1640 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-208700 -n kubernetes-upgrade-208700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-208700 -n kubernetes-upgrade-208700: exit status 2 (12.7496658s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0421 21:21:38.669476    5548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-208700" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-208700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-208700
E0421 21:22:00.399385   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 21:22:54.278449   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-208700: (1m5.0573299s)
--- FAIL: TestKubernetesUpgrade (1625.01s)

TestNoKubernetes/serial/StartWithK8s (302.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-043400 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-043400 --driver=hyperv: exit status 1 (4m59.8050361s)

-- stdout --
	* [NoKubernetes-043400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-043400" primary control-plane node in "NoKubernetes-043400" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0421 20:50:29.536626    6748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-043400 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-043400 -n NoKubernetes-043400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-043400 -n NoKubernetes-043400: exit status 7 (3.0734827s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0421 20:55:29.290658    4844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0421 20:55:32.200152    4844 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-043400".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-043400 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-043400:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-043400" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.88s)

TestNetworkPlugins/group/kindnet/Start (10800.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-190300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (27m39s)
	TestNetworkPlugins/group/auto (7m4s)
	TestNetworkPlugins/group/auto/Start (7m4s)
	TestNetworkPlugins/group/kindnet (11s)
	TestNetworkPlugins/group/kindnet/Start (11s)
	TestPause (7m18s)
	TestPause/serial (7m18s)
	TestPause/serial/SecondStartNoReconfiguration (1m44s)
	TestStartStop (27m16s)
	TestStoppedBinaryUpgrade (12m33s)
	TestStoppedBinaryUpgrade/Upgrade (12m32s)

goroutine 2346 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000158ea0, 0xc00090bbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006f4438, {0x4f1c540, 0x2a, 0x2a}, {0x2be8373?, 0xa2806f?, 0x4f3f760?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006d12c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006d12c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070e80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2151 [chan receive, 27 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0022e29c0, 0xc0021620f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2054
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 22 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 11
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 2273 [syscall, locked to thread]:
syscall.SyscallN(0x9eddc8?, {0xc002a63b20?, 0x987ea5?, 0x10?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1806a683f80?, 0xc002a63b80?, 0x97fdd6?, 0x4fccbc0?, 0xc002a63c08?, 0x972985?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6d4, {0xc00287ad04?, 0x72fc, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0006d8c88?, {0xc00287ad04?, 0x0?, 0x20000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0006d8c88, {0xc00287ad04, 0x72fc, 0x72fc})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000548300, {0xc00287ad04?, 0x0?, 0xfe1a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0025323c0, {0x3b50de0, 0xc00011c848})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc0025323c0}, {0x3b50de0, 0xc00011c848}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x2b9de9c?, {0x3b50f20, 0xc0025323c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x3b50f20?, 0xc0025323c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc0025323c0}, {0x3b50ea0, 0xc000548300}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x35fadf0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2328
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 187 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007e6380, 0xc0001060c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 197
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 186 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00087b080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 197
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2154 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e2ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022e2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022e2ea0, 0xc0026b2180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 707 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x1806a634d70, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x97fdd6?, 0x4fccbc0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0008c1ba0, 0xc002505bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0008c1b88, 0x39c, {0xc0006e40f0?, 0x0?, 0x0?}, 0xc0006a0008?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0008c1b88, 0xc002505d90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0008c1b88)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc000b072a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000b072a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00002e0f0, {0x3b68cc0, 0xc000b072a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc00002e0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0020f4820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 696
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2114 [chan receive, 13 minutes]:
testing.(*T).Run(0xc0022e2680, {0x2b907eb?, 0x3005753e800?}, 0xc002734300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0022e2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc0022e2680, 0x35faf28)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2156 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e31e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022e31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022e31e0, 0xc0026b2280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 203 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0007e6350, 0x3c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2684b80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00087af60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007e6380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000067420, {0x3b52220, 0xc00087da40}, 0x1, 0xc0001060c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000067420, 0x3b9aca00, 0x0, 0x1, 0xc0001060c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 187
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 204 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3b75c20, 0xc0001060c0}, 0xc000907f50, 0xc000907f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3b75c20, 0xc0001060c0}, 0x10?, 0xc000907f50, 0xc000907f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3b75c20?, 0xc0001060c0?}, 0xc0004a07e0?, 0xc0004a0850?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0004a0af0?, 0xc0004a0b60?, 0xc0004a0bd0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 187
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 205 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 204
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2272 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000bd5b20?, 0x987ea5?, 0x4fccbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000bd5b35?, 0xc000bd5b80?, 0x97fdd6?, 0x4fccbc0?, 0xc000bd5c08?, 0x972985?, 0x18064be0598?, 0x4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x418, {0xc000970300?, 0x500, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0006d8788?, {0xc000970300?, 0x9ac1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0006d8788, {0xc000970300, 0x500, 0x500})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0005482b8, {0xc000970300?, 0xc000b7c8c0?, 0x224?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002532390, {0x3b50de0, 0xc000b92590})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc002532390}, {0x3b50de0, 0xc000b92590}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000bd5e78?, {0x3b50f20, 0xc002532390})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000bd5f38?, {0x3b50f20?, 0xc002532390?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc002532390}, {0x3b50ea0, 0xc0005482b8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00241b080?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2328
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2054 [chan receive, 29 minutes]:
testing.(*T).Run(0xc000158820, {0x2b8c851?, 0x9df48d?}, 0xc0021620f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000158820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000158820, 0x35faed8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2330 [syscall, locked to thread]:
syscall.SyscallN(0xc000158d00?, {0xc000a61b20?, 0x987ea5?, 0x4fccbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xc000a61b80?, 0x97fdd6?, 0x4fccbc0?, 0xc000a61c08?, 0x97281b?, 0x968ba6?, 0xc000a61b41?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x718, {0xc000b6cdef?, 0x211, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00210a288?, {0xc000b6cdef?, 0x9ac1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00210a288, {0xc000b6cdef, 0x211, 0x211})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000186ac8, {0xc000b6cdef?, 0xc000a61d98?, 0x6a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00217c480, {0x3b50de0, 0xc00011caf0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc00217c480}, {0x3b50de0, 0xc00011caf0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b50f20, 0xc00217c480})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x970c36?, {0x3b50f20?, 0xc00217c480?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc00217c480}, {0x3b50ea0, 0xc000186ac8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0001a0f60?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2329
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 838 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 837
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2158 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e3520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e3520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022e3520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022e3520, 0xc0026b2380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2323 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000a4c340, {0x2bcb73f?, 0x63?}, 0xc0007e6040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc000a4c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc000a4c340, 0xc002532120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2056
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 837 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3b75c20, 0xc0001060c0}, 0xc002b99f50, 0xc002b99f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3b75c20, 0xc0001060c0}, 0xe0?, 0xc002b99f50, 0xc002b99f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3b75c20?, 0xc0001060c0?}, 0xc002b99fb0?, 0xf06448?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xafe3a5?, 0xc0014571e0?, 0xc0029073e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 804
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 1188 [chan send, 144 minutes]:
os/exec.(*Cmd).watchCtx(0xc002ba6420, 0xc0027de840)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 798
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 803 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0029fe4e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 855
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 804 [chan receive, 151 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000873b40, 0xc0001060c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 855
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2311 [syscall, 6 minutes, locked to thread]:
syscall.SyscallN(0x7ffb637a4de0?, {0xc002b8d948?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x730, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0007b2d20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001456c60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001456c60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a4c4e0, 0xc001456c60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc000a4c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:198 +0x728
testing.tRunner(0xc000a4c4e0, 0xc002734300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2114
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 836 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000873b10, 0x36)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2684b80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0029fe3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000873b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005a2740, {0x3b52220, 0xc000a989f0}, 0x1, 0xc0001060c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005a2740, 0x3b9aca00, 0x0, 0x1, 0xc0001060c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 804
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2329 [syscall, locked to thread]:
syscall.SyscallN(0x7ffb637a4de0?, {0xc000bb3bd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x780, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0007b2810)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0027c62c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0027c62c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a4c680, 0xc0027c62c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000a4c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000a4c680, 0xc00217c3c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2157
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2331 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000bc9b20?, 0x987ea5?, 0x4fccbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x972c41?, 0xc000bc9b80?, 0x97fdd6?, 0x4fccbc0?, 0xc000bc9c08?, 0x972985?, 0x18064be0108?, 0xc000785f67?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x790, {0xc000a03ca4?, 0x35c, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00210a788?, {0xc000a03ca4?, 0x9ac1be?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00210a788, {0xc000a03ca4, 0x35c, 0x35c})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000186b38, {0xc000a03ca4?, 0xc0024c01c0?, 0x1000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00217c4b0, {0x3b50de0, 0xc0005481c8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc00217c4b0}, {0x3b50de0, 0xc0005481c8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000bc9e78?, {0x3b50f20, 0xc00217c4b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000bc9f38?, {0x3b50f20?, 0xc00217c4b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc00217c4b0}, {0x3b50ea0, 0xc000186b38}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000054600?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2329
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2155 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e3040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022e3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022e3040, 0xc0026b2200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2157 [chan receive]:
testing.(*T).Run(0xc0022e3380, {0x2b8c856?, 0x3b4ada8?}, 0xc00217c3c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022e3380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0022e3380, 0xc0026b2300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2247 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e3860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022e3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0022e3860, 0xc000873c00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2242
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2332 [select]:
os/exec.(*Cmd).watchCtx(0xc0027c62c0, 0xc000054720)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2329
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2328 [syscall, 8 minutes, locked to thread]:
syscall.SyscallN(0x7ffb637a4de0?, {0xc00250fbd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4cc, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002cb0750)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0027c6160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0027c6160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a4c9c0, 0xc0027c6160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000a4c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000a4c9c0, 0xc0025322d0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2152
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2362 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x1806a1ba938?, {0xc000ba1b20?, 0x987ea5?, 0x4fccbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1806a1ba935?, 0xc000ba1b80?, 0x97fdd6?, 0x4fccbc0?, 0xc000ba1c08?, 0x972985?, 0x18064be0eb8?, 0x100041?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x378, {0xc000b6c1e7?, 0x219, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002888288?, {0xc000b6c1e7?, 0x9ac1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002888288, {0xc000b6c1e7, 0x219, 0x219})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000186930, {0xc000b6c1e7?, 0xc0024c1500?, 0x68?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002532270, {0x3b50de0, 0xc000548038})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc002532270}, {0x3b50de0, 0xc000548038}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000ba1e78?, {0x3b50f20, 0xc002532270})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000ba1f38?, {0x3b50f20?, 0xc002532270?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc002532270}, {0x3b50ea0, 0xc000186930}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00241a120?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2361
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2360 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc001456c60, 0xc0023ba120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2311
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 943 [chan send, 151 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027c6f20, 0xc0027c8480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 942
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2138 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0001584e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0001584e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0001584e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0001584e0, 0xc000070080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2363 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x1?, {0xc002e85b20?, 0x987ea5?, 0x4fccbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0xc002e85b80?, 0x97fdd6?, 0x4fccbc0?, 0xc002e85c08?, 0x972985?, 0x18064be0eb8?, 0x2ba0977?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4d0, {0xc00095c73d?, 0x18c3, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002888a08?, {0xc00095c73d?, 0x9ac1be?, 0x4000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002888a08, {0xc00095c73d, 0x18c3, 0x18c3})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000186988, {0xc00095c73d?, 0xc00285c1c0?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0025322a0, {0x3b50de0, 0xc00011c110})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc0025322a0}, {0x3b50de0, 0xc00011c110}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002e85e78?, {0x3b50f20, 0xc0025322a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc002e85f38?, {0x3b50f20?, 0xc0025322a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc0025322a0}, {0x3b50ea0, 0xc000186988}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0000547e0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2361
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2361 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffb637a4de0?, {0xc000be3a10?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x70c, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002cb07e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0014569a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0014569a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a4cd00, 0xc0014569a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x3b75a60, 0xc0006f61c0}, 0xc000a4cd00, {0xc000588040?, 0xc00f1b10c0?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x245
k8s.io/minikube/test/integration.TestPause.func1.1(0xc000a4cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc000a4cd00, 0xc0007e6040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2323
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2358 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000a79b20?, 0x987ea5?, 0x4fccbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0xc000a79b80?, 0x97fdd6?, 0x4fccbc0?, 0xc000a79c08?, 0x972985?, 0x18064be0598?, 0x4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x420, {0xc0025732b0?, 0x550, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002889688?, {0xc0025732b0?, 0x9ac1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002889688, {0xc0025732b0, 0x550, 0x550})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc00011c9e0, {0xc0025732b0?, 0xc000a79d98?, 0x20c?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00217c210, {0x3b50de0, 0xc0001869b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc00217c210}, {0x3b50de0, 0xc0001869b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b50f20, 0xc00217c210})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x970c36?, {0x3b50f20?, 0xc00217c210?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc00217c210}, {0x3b50ea0, 0xc00011c9e0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00241a420?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2311
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2354 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027c6160, 0xc0023baea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2328
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2079 [chan receive, 27 minutes]:
testing.(*T).Run(0xc000159520, {0x2b8c851?, 0xab7333?}, 0x35fb0f8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000159520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000159520, 0x35faf20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2245 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000159d40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000159d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000159d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000159d40, 0xc000873b80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2242
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2139 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0001591e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0001591e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0001591e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0001591e0, 0xc000070100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2056 [chan receive, 8 minutes]:
testing.(*T).Run(0xc000159040, {0x2b8dd55?, 0xd18c2e2800?}, 0xc002532120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc000159040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc000159040, 0x35faef0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2359 [syscall, locked to thread]:
syscall.SyscallN(0x1806a263dd8?, {0xc00265bb20?, 0x987ea5?, 0x4?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1806a263dd8?, 0xc00265bb80?, 0x97fdd6?, 0x4fccbc0?, 0xc00265bc08?, 0x972985?, 0x18064be0a28?, 0x8000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2e0, {0xc00219fe02?, 0x21fe, 0xa2417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002889b88?, {0xc00219fe02?, 0x2e2f?, 0x2e2f?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002889b88, {0xc00219fe02, 0x21fe, 0x21fe})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc00011ca28, {0xc00219fe02?, 0x3940?, 0x3940?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00217c240, {0x3b50de0, 0xc000b923f0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b50f20, 0xc00217c240}, {0x3b50de0, 0xc000b923f0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x2b9de9c?, {0x3b50f20, 0xc00217c240})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x3b50f20?, 0xc00217c240?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3b50f20, 0xc00217c240}, {0x3b50ea0, 0xc00011ca28}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x35fadf0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2311
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2152 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0022e2b60, {0x2b8c856?, 0x3b4ada8?}, 0xc0025322d0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022e2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0022e2b60, 0xc0026b2080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2246 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e36c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e36c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022e36c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0022e36c0, 0xc000873bc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2242
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2153 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e2d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0022e2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0022e2d00, 0xc0026b2100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2151
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2248 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0022e3a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0022e3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0022e3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0022e3a00, 0xc000873c80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2242
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2242 [chan receive, 27 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000159860, 0x35fb0f8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2079
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2244 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000159ba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000159ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000159ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000159ba0, 0xc000873a80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2242
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2243 [chan receive, 27 minutes]:
testing.(*testContext).waitParallel(0xc00090e370)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000159a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000159a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000159a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000159a00, 0xc000873a40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2242
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2364 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014569a0, 0xc0023ba180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2361
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3
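Most of the long-parked goroutines above are sitting in testing.(*testContext).waitParallel: a subtest that calls t.Parallel() pauses until its parent test function returns and a slot under -test.parallel is free, so the TestStartStop and TestNetworkPlugins children wait on a channel receive while earlier parallel tests are still running. A minimal sketch of that behaviour, not minikube's MaybeParallel helper:

package example

import (
	"testing"
	"time"
)

// Run with: go test -run TestParallelSlots -parallel 1 -v
// With the parallel limit at 1, the second subtest parks inside waitParallel
// (a channel receive) until the first one finishes, mirroring the blocked
// goroutines in the dump above.
func TestParallelSlots(t *testing.T) {
	for _, name := range []string{"first", "second"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // pauses here until the parent returns and a slot frees up
			time.Sleep(100 * time.Millisecond)
		})
	}
}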

                                                
                                    

Test pass (152/197)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.12
4 TestDownloadOnly/v1.20.0/preload-exists 0.1
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.57
9 TestDownloadOnly/v1.20.0/DeleteAll 1.47
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.48
12 TestDownloadOnly/v1.30.0/json-events 11.07
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.49
18 TestDownloadOnly/v1.30.0/DeleteAll 1.42
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.38
21 TestBinaryMirror 7.54
22 TestOffline 299.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.34
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.34
27 TestAddons/Setup 403.3
30 TestAddons/parallel/Ingress 68.65
31 TestAddons/parallel/InspektorGadget 27.31
32 TestAddons/parallel/MetricsServer 22.33
33 TestAddons/parallel/HelmTiller 29.13
35 TestAddons/parallel/CSI 90.3
36 TestAddons/parallel/Headlamp 35.53
37 TestAddons/parallel/CloudSpanner 21.56
38 TestAddons/parallel/LocalPath 45.29
39 TestAddons/parallel/NvidiaDevicePlugin 21
40 TestAddons/parallel/Yakd 5.03
43 TestAddons/serial/GCPAuth/Namespaces 0.35
44 TestAddons/StoppedEnableDisable 55.98
45 TestCertOptions 397.55
47 TestDockerFlags 512.37
48 TestForceSystemdFlag 408.44
49 TestForceSystemdEnv 387.05
56 TestErrorSpam/start 18.04
57 TestErrorSpam/status 37.95
58 TestErrorSpam/pause 23.77
59 TestErrorSpam/unpause 23.73
60 TestErrorSpam/stop 58.5
63 TestFunctional/serial/CopySyncFile 0.03
64 TestFunctional/serial/StartWithProxy 249.02
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 132.81
67 TestFunctional/serial/KubeContext 0.16
68 TestFunctional/serial/KubectlGetPods 0.26
71 TestFunctional/serial/CacheCmd/cache/add_remote 26.96
72 TestFunctional/serial/CacheCmd/cache/add_local 11.52
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.3
74 TestFunctional/serial/CacheCmd/cache/list 0.3
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.72
76 TestFunctional/serial/CacheCmd/cache/cache_reload 37.47
77 TestFunctional/serial/CacheCmd/cache/delete 0.62
78 TestFunctional/serial/MinikubeKubectlCmd 0.63
80 TestFunctional/serial/ExtraConfig 131.26
81 TestFunctional/serial/ComponentHealth 0.19
82 TestFunctional/serial/LogsCmd 9.01
83 TestFunctional/serial/LogsFileCmd 11.19
84 TestFunctional/serial/InvalidService 21.47
90 TestFunctional/parallel/StatusCmd 43.99
94 TestFunctional/parallel/ServiceCmdConnect 27.76
95 TestFunctional/parallel/AddonsCmd 0.82
96 TestFunctional/parallel/PersistentVolumeClaim 49.6
98 TestFunctional/parallel/SSHCmd 20.44
99 TestFunctional/parallel/CpCmd 60.65
100 TestFunctional/parallel/MySQL 62.78
101 TestFunctional/parallel/FileSync 10.83
102 TestFunctional/parallel/CertSync 68.69
106 TestFunctional/parallel/NodeLabels 0.23
108 TestFunctional/parallel/NonActiveRuntimeDisabled 12.14
110 TestFunctional/parallel/License 3.66
111 TestFunctional/parallel/ServiceCmd/DeployApp 17.46
112 TestFunctional/parallel/ServiceCmd/List 14.21
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.31
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.71
118 TestFunctional/parallel/ServiceCmd/JSONOutput 13.92
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/Version/short 0.28
127 TestFunctional/parallel/Version/components 8.55
128 TestFunctional/parallel/ImageCommands/ImageListShort 7.69
129 TestFunctional/parallel/ImageCommands/ImageListTable 7.69
130 TestFunctional/parallel/ImageCommands/ImageListJson 7.73
131 TestFunctional/parallel/ImageCommands/ImageListYaml 7.56
132 TestFunctional/parallel/ImageCommands/ImageBuild 27.46
133 TestFunctional/parallel/ImageCommands/Setup 4.91
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 27.98
136 TestFunctional/parallel/DockerEnv/powershell 49.9
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 22.52
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 29.65
140 TestFunctional/parallel/UpdateContextCmd/no_changes 3.15
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.61
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.59
143 TestFunctional/parallel/ProfileCmd/profile_not_create 11.7
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.66
145 TestFunctional/parallel/ProfileCmd/profile_list 12.01
146 TestFunctional/parallel/ImageCommands/ImageRemove 17.5
147 TestFunctional/parallel/ProfileCmd/profile_json_output 11.9
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 19.03
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.2
150 TestFunctional/delete_addon-resizer_images 0.49
151 TestFunctional/delete_my-image_image 0.2
152 TestFunctional/delete_minikube_cached_images 0.2
156 TestMultiControlPlane/serial/StartCluster 734.43
157 TestMultiControlPlane/serial/DeployApp 13.43
159 TestMultiControlPlane/serial/AddWorkerNode 258.8
160 TestMultiControlPlane/serial/NodeLabels 0.2
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.15
162 TestMultiControlPlane/serial/CopyFile 643.84
166 TestImageBuild/serial/Setup 203.4
167 TestImageBuild/serial/NormalBuild 9.94
168 TestImageBuild/serial/BuildWithBuildArg 9.29
169 TestImageBuild/serial/BuildWithDockerIgnore 7.93
170 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.73
174 TestJSONOutput/start/Command 216.41
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 8.17
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 8.04
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 40.71
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.58
202 TestMainNoArgs 0.28
203 TestMinikubeProfile 539.15
206 TestMountStart/serial/StartWithMountFirst 160.01
207 TestMountStart/serial/VerifyMountFirst 9.69
208 TestMountStart/serial/StartWithMountSecond 160.42
209 TestMountStart/serial/VerifyMountSecond 9.59
210 TestMountStart/serial/DeleteFirst 28.06
211 TestMountStart/serial/VerifyMountPostDelete 9.69
212 TestMountStart/serial/Stop 30.89
213 TestMountStart/serial/RestartStopped 121.07
214 TestMountStart/serial/VerifyMountPostStop 9.49
217 TestMultiNode/serial/FreshStart2Nodes 435.14
218 TestMultiNode/serial/DeployApp2Nodes 9.34
220 TestMultiNode/serial/AddNode 236.91
221 TestMultiNode/serial/MultiNodeLabels 0.19
222 TestMultiNode/serial/ProfileList 9.91
223 TestMultiNode/serial/CopyFile 371.76
224 TestMultiNode/serial/StopNode 78.89
225 TestMultiNode/serial/StartAfterStop 188.54
230 TestPreload 543.16
231 TestScheduledStopWindows 337.62
236 TestRunningBinaryUpgrade 1123.23
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
TestDownloadOnly/v1.20.0/json-events (17.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-841000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-841000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.1195487s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-841000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-841000: exit status 85 (566.5819ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-841000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |          |
	|         | -p download-only-841000        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:23:07
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:23:07.877618   12836 out.go:291] Setting OutFile to fd 676 ...
	I0421 18:23:07.878960   12836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:23:07.878960   12836 out.go:304] Setting ErrFile to fd 680...
	I0421 18:23:07.879081   12836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0421 18:23:07.893320   12836 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0421 18:23:07.905357   12836 out.go:298] Setting JSON to true
	I0421 18:23:07.909826   12836 start.go:129] hostinfo: {"hostname":"minikube6","uptime":9663,"bootTime":1713714124,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 18:23:07.909881   12836 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 18:23:07.921206   12836 out.go:97] [download-only-841000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 18:23:07.921302   12836 notify.go:220] Checking for updates...
	W0421 18:23:07.921302   12836 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0421 18:23:07.924465   12836 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:23:07.928327   12836 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 18:23:07.931046   12836 out.go:169] MINIKUBE_LOCATION=18702
	I0421 18:23:07.933327   12836 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0421 18:23:07.938877   12836 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0421 18:23:07.939745   12836 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:23:13.476773   12836 out.go:97] Using the hyperv driver based on user configuration
	I0421 18:23:13.476922   12836 start.go:297] selected driver: hyperv
	I0421 18:23:13.476922   12836 start.go:901] validating driver "hyperv" against <nil>
	I0421 18:23:13.477147   12836 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:23:13.533317   12836 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0421 18:23:13.535195   12836 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0421 18:23:13.535195   12836 cni.go:84] Creating CNI manager for ""
	I0421 18:23:13.535195   12836 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0421 18:23:13.536104   12836 start.go:340] cluster config:
	{Name:download-only-841000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:23:13.537037   12836 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:23:13.540306   12836 out.go:97] Downloading VM boot image ...
	I0421 18:23:13.540568   12836 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-amd64.iso
	I0421 18:23:17.643970   12836 out.go:97] Starting "download-only-841000" primary control-plane node in "download-only-841000" cluster
	I0421 18:23:17.644299   12836 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0421 18:23:17.688777   12836 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0421 18:23:17.688867   12836 cache.go:56] Caching tarball of preloaded images
	I0421 18:23:17.689282   12836 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0421 18:23:17.692635   12836 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0421 18:23:17.692635   12836 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0421 18:23:17.752352   12836 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0421 18:23:21.281598   12836 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0421 18:23:21.283775   12836 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0421 18:23:22.369367   12836 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0421 18:23:22.369367   12836 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-841000\config.json ...
	I0421 18:23:22.370388   12836 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-841000\config.json: {Name:mka3790dd5c56b67279ba4a68d438294f48dd006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:22.370388   12836 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0421 18:23:22.373824   12836 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-841000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-841000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:23:25.018406    3996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.57s)
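The "Last Start" log above records the preload flow: find the remote tarball, download it with an expected md5 checksum, then verify that checksum before the cache is trusted. Below is a minimal sketch of that download-then-verify pattern; downloadAndVerify is a hypothetical helper and the URL is a placeholder (only the md5 value is copied from the log), not minikube's download.go API:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and checks its MD5 against wantMD5,
// hashing the stream while writing so the file is not re-read afterwards.
func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder URL; the checksum is the one shown for the v1.20.0 preload above.
	err := downloadAndVerify(
		"https://example.com/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
		"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
		"9a82241e9b8b4ad2b5cca73108f2c7a3",
	)
	if err != nil {
		fmt.Println(err)
	}
}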

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4678835s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.47s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-841000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-841000: (1.4824235s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.48s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (11.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-510000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-510000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (11.065217s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (11.07s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-510000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-510000: exit status 85 (491.8778ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-841000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |                     |
	|         | -p download-only-841000        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| delete  | -p download-only-841000        | download-only-841000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC | 21 Apr 24 18:23 UTC |
	| start   | -o=json --download-only        | download-only-510000 | minikube6\jenkins | v1.33.0 | 21 Apr 24 18:23 UTC |                     |
	|         | -p download-only-510000        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/21 18:23:28
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0421 18:23:28.617931    6448 out.go:291] Setting OutFile to fd 716 ...
	I0421 18:23:28.618648    6448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:23:28.618648    6448 out.go:304] Setting ErrFile to fd 640...
	I0421 18:23:28.618648    6448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:23:28.642332    6448 out.go:298] Setting JSON to true
	I0421 18:23:28.646020    6448 start.go:129] hostinfo: {"hostname":"minikube6","uptime":9683,"bootTime":1713714124,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 18:23:28.646020    6448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 18:23:28.659974    6448 out.go:97] [download-only-510000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 18:23:28.663280    6448 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:23:28.661020    6448 notify.go:220] Checking for updates...
	I0421 18:23:28.668899    6448 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 18:23:28.671825    6448 out.go:169] MINIKUBE_LOCATION=18702
	I0421 18:23:28.674839    6448 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0421 18:23:28.680647    6448 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0421 18:23:28.681312    6448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0421 18:23:34.452646    6448 out.go:97] Using the hyperv driver based on user configuration
	I0421 18:23:34.453180    6448 start.go:297] selected driver: hyperv
	I0421 18:23:34.453180    6448 start.go:901] validating driver "hyperv" against <nil>
	I0421 18:23:34.453625    6448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0421 18:23:34.510213    6448 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0421 18:23:34.510523    6448 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0421 18:23:34.510523    6448 cni.go:84] Creating CNI manager for ""
	I0421 18:23:34.511574    6448 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0421 18:23:34.511574    6448 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0421 18:23:34.511855    6448 start.go:340] cluster config:
	{Name:download-only-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0421 18:23:34.512204    6448 iso.go:125] acquiring lock: {Name:mkb4dc928db5c2734f66eedb6a33b43851668f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0421 18:23:34.516068    6448 out.go:97] Starting "download-only-510000" primary control-plane node in "download-only-510000" cluster
	I0421 18:23:34.516068    6448 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 18:23:34.556584    6448 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 18:23:34.557169    6448 cache.go:56] Caching tarball of preloaded images
	I0421 18:23:34.557646    6448 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 18:23:34.561211    6448 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0421 18:23:34.561211    6448 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0421 18:23:34.638069    6448 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0421 18:23:37.364944    6448 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0421 18:23:37.365938    6448 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0421 18:23:38.338465    6448 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0421 18:23:38.339433    6448 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-510000\config.json ...
	I0421 18:23:38.339433    6448 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-510000\config.json: {Name:mkb4f7ff2b25be609321d43d1dba51405619c86e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0421 18:23:38.340165    6448 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0421 18:23:38.341520    6448 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.30.0/kubectl.exe
	
	
	* The control-plane node download-only-510000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-510000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:23:39.598352   14124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.49s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (1.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.418864s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.42s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-510000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-510000: (1.3814469s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.38s)

                                                
                                    
TestBinaryMirror (7.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-311400 --alsologtostderr --binary-mirror http://127.0.0.1:59571 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-311400 --alsologtostderr --binary-mirror http://127.0.0.1:59571 --driver=hyperv: (6.5539411s)
helpers_test.go:175: Cleaning up "binary-mirror-311400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-311400
--- PASS: TestBinaryMirror (7.54s)

                                                
                                    
TestOffline (299.35s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-868700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-868700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m12.4829268s)
helpers_test.go:175: Cleaning up "offline-docker-868700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-868700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-868700: (46.8621977s)
--- PASS: TestOffline (299.35s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.34s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-519700
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-519700: exit status 85 (343.1986ms)

                                                
                                                
-- stdout --
	* Profile "addons-519700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-519700"

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:23:53.337732    5576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.34s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.34s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-519700
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-519700: exit status 85 (340.2695ms)

                                                
                                                
-- stdout --
	* Profile "addons-519700" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-519700"

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:23:53.352477   10192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.34s)

                                                
                                    
TestAddons/Setup (403.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-519700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-519700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m43.3033811s)
--- PASS: TestAddons/Setup (403.30s)

                                                
                                    
TestAddons/parallel/Ingress (68.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-519700 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-519700 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-519700 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [18b2304b-3c23-47da-ab19-5447cff07c0c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [18b2304b-3c23-47da-ab19-5447cff07c0c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.0099519s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.3953723s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-519700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0421 18:32:05.052325    6676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-519700 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 ip: (2.4973888s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.27.202.1
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable ingress-dns --alsologtostderr -v=1: (16.3931877s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable ingress --alsologtostderr -v=1: (22.0953904s)
--- PASS: TestAddons/parallel/Ingress (68.65s)

                                                
                                    
TestAddons/parallel/InspektorGadget (27.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ckvnx" [252e89d7-799a-4ef2-9c6f-b82714e41de5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0161689s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-519700
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-519700: (22.2937193s)
--- PASS: TestAddons/parallel/InspektorGadget (27.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (22.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 18.9968ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-jkvcv" [bb15429d-fae9-46e8-a28c-00e1701e4b66] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0187762s
addons_test.go:415: (dbg) Run:  kubectl --context addons-519700 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable metrics-server --alsologtostderr -v=1: (17.1067697s)
--- PASS: TestAddons/parallel/MetricsServer (22.33s)

                                                
                                    
TestAddons/parallel/HelmTiller (29.13s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 8.229ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-xvfwl" [33c57d3b-8b9e-4319-95d2-4d55044ded10] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0116993s
addons_test.go:473: (dbg) Run:  kubectl --context addons-519700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-519700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.0276922s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable helm-tiller --alsologtostderr -v=1: (16.0625133s)
--- PASS: TestAddons/parallel/HelmTiller (29.13s)

                                                
                                    
TestAddons/parallel/CSI (90.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 23.9895ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-519700 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-519700 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3dcd92b5-d06d-4dc0-b32f-5ae0d20217a7] Pending
helpers_test.go:344: "task-pv-pod" [3dcd92b5-d06d-4dc0-b32f-5ae0d20217a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3dcd92b5-d06d-4dc0-b32f-5ae0d20217a7] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 27.0234514s
addons_test.go:584: (dbg) Run:  kubectl --context addons-519700 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-519700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-519700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-519700 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-519700 delete pod task-pv-pod: (1.3209807s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-519700 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-519700 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-519700 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5f636b82-7e54-4969-b5fa-7ec7fe29c7c4] Pending
helpers_test.go:344: "task-pv-pod-restore" [5f636b82-7e54-4969-b5fa-7ec7fe29c7c4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5f636b82-7e54-4969-b5fa-7ec7fe29c7c4] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0114591s
addons_test.go:626: (dbg) Run:  kubectl --context addons-519700 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-519700 delete pod task-pv-pod-restore: (1.0463708s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-519700 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-519700 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.1102485s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable volumesnapshots --alsologtostderr -v=1: (16.1761328s)
--- PASS: TestAddons/parallel/CSI (90.30s)

                                                
                                    
TestAddons/parallel/Headlamp (35.53s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-519700 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-519700 --alsologtostderr -v=1: (16.5122519s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-52bdm" [0f0c4fbf-b05b-47bb-9ef6-1014947dccc1] Pending
helpers_test.go:344: "headlamp-7559bf459f-52bdm" [0f0c4fbf-b05b-47bb-9ef6-1014947dccc1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-52bdm" [0f0c4fbf-b05b-47bb-9ef6-1014947dccc1] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0172455s
--- PASS: TestAddons/parallel/Headlamp (35.53s)

                                                
                                    
TestAddons/parallel/CloudSpanner (21.56s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-s5lp5" [07cfdd25-3763-4333-b4b8-2460322b203b] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014728s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-519700
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-519700: (16.5302318s)
--- PASS: TestAddons/parallel/CloudSpanner (21.56s)

                                                
                                    
TestAddons/parallel/LocalPath (45.29s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-519700 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-519700 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d6e31b0b-b364-4970-8ca8-6ea8bfcbcdb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d6e31b0b-b364-4970-8ca8-6ea8bfcbcdb2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d6e31b0b-b364-4970-8ca8-6ea8bfcbcdb2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 16.0174212s
addons_test.go:891: (dbg) Run:  kubectl --context addons-519700 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 ssh "cat /opt/local-path-provisioner/pvc-4f455934-1b66-474d-b61f-c07d9fbf4635_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 ssh "cat /opt/local-path-provisioner/pvc-4f455934-1b66-474d-b61f-c07d9fbf4635_default_test-pvc/file1": (10.3074586s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-519700 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-519700 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-519700 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-519700 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.1795209s)
--- PASS: TestAddons/parallel/LocalPath (45.29s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (21s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7fzh9" [176943d4-0626-4c59-ac8a-b6ca63fa35e2] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0139129s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-519700
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-519700: (15.9833238s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.00s)

                                                
                                    
TestAddons/parallel/Yakd (5.03s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-m99b9" [84e029e1-3daa-4413-b43e-0429bbfd1fc3] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0242431s
--- PASS: TestAddons/parallel/Yakd (5.03s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.35s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-519700 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-519700 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)

                                                
                                    
TestAddons/StoppedEnableDisable (55.98s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-519700
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-519700: (42.9362391s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-519700
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-519700: (5.2286446s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-519700
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-519700: (5.0063819s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-519700
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-519700: (2.8107757s)
--- PASS: TestAddons/StoppedEnableDisable (55.98s)

                                                
                                    
TestCertOptions (397.55s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-338400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-338400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (5m26.3083271s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-338400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-338400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (11.1691532s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-338400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-338400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-338400 -- "sudo cat /etc/kubernetes/admin.conf": (11.099591s)
helpers_test.go:175: Cleaning up "cert-options-338400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-338400
E0421 21:15:36.948257   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-338400: (48.7849362s)
--- PASS: TestCertOptions (397.55s)

                                                
                                    
TestDockerFlags (512.37s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-064200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0421 21:02:54.275144   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-064200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (7m29.5611392s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-064200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-064200 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.908984s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-064200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-064200 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.0441075s)
helpers_test.go:175: Cleaning up "docker-flags-064200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-064200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-064200: (42.8497341s)
--- PASS: TestDockerFlags (512.37s)

                                                
                                    
TestForceSystemdFlag (408.44s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-149100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-149100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m51.5615017s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-149100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-149100 ssh "docker info --format {{.CgroupDriver}}": (10.0703481s)
helpers_test.go:175: Cleaning up "force-systemd-flag-149100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-149100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-149100: (46.8029327s)
--- PASS: TestForceSystemdFlag (408.44s)

                                                
                                    
TestForceSystemdEnv (387.05s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-214100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0421 20:55:36.936572   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-214100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m34.7514444s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-214100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-214100 ssh "docker info --format {{.CgroupDriver}}": (10.1481s)
helpers_test.go:175: Cleaning up "force-systemd-env-214100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-214100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-214100: (42.1441906s)
--- PASS: TestForceSystemdEnv (387.05s)

                                                
                                    
TestErrorSpam/start (18.04s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 start --dry-run: (5.9242685s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 start --dry-run: (6.0957718s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 start --dry-run: (6.0151466s)
--- PASS: TestErrorSpam/start (18.04s)

                                                
                                    
TestErrorSpam/status (37.95s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 status: (13.0002063s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 status: (12.3633408s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 status: (12.5811558s)
--- PASS: TestErrorSpam/status (37.95s)

                                                
                                    
TestErrorSpam/pause (23.77s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 pause: (8.1163772s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 pause: (7.8294287s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 pause: (7.8214642s)
--- PASS: TestErrorSpam/pause (23.77s)

                                                
                                    
TestErrorSpam/unpause (23.73s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 unpause: (8.0377479s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 unpause: (7.8954069s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 unpause: (7.7976472s)
--- PASS: TestErrorSpam/unpause (23.73s)

                                                
                                    
TestErrorSpam/stop (58.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 stop
E0421 18:40:36.880872   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 stop: (36.1270166s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 stop
E0421 18:41:04.664966   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 stop: (11.346029s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-389800 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-389800 stop: (11.0260779s)
--- PASS: TestErrorSpam/stop (58.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\13800\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (249.02s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0421 18:45:36.879755   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-808300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m9.0076742s)
--- PASS: TestFunctional/serial/StartWithProxy (249.02s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (132.81s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-808300 --alsologtostderr -v=8: (2m12.8072166s)
functional_test.go:659: soft start took 2m12.8085571s for "functional-808300" cluster.
--- PASS: TestFunctional/serial/SoftStart (132.81s)

                                                
                                    
TestFunctional/serial/KubeContext (0.16s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.16s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.26s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-808300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (26.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.1: (9.2262956s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:3.3: (8.9233858s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add registry.k8s.io/pause:latest: (8.8077674s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.96s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (11.52s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-808300 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3889495069\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-808300 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3889495069\001: (2.5360619s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache add minikube-local-cache-test:functional-808300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache add minikube-local-cache-test:functional-808300: (8.4339943s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache delete minikube-local-cache-test:functional-808300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-808300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl images: (9.7163347s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (37.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.7625672s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.7322491s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:48:57.174671    2272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cache reload: (8.436084s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.5401966s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (37.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.62s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.63s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 kubectl -- --context functional-808300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.63s)

                                                
                                    
TestFunctional/serial/ExtraConfig (131.26s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0421 18:50:36.884164   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 18:52:00.036837   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-808300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m11.2607931s)
functional_test.go:757: restart took 2m11.2617847s for "functional-808300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (131.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.19s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-808300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

                                                
                                    
TestFunctional/serial/LogsCmd (9.01s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs: (9.0061054s)
--- PASS: TestFunctional/serial/LogsCmd (9.01s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (11.19s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd436076502\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd436076502\001\logs.txt: (11.1876826s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.19s)

                                                
                                    
TestFunctional/serial/InvalidService (21.47s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-808300 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-808300
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-808300: exit status 115 (17.2549934s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.27.199.19:31975 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:52:36.311701    8652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-808300 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.47s)

                                                
                                    
TestFunctional/parallel/StatusCmd (43.99s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 status: (14.9843268s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.879225s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 status -o json: (15.1248179s)
--- PASS: TestFunctional/parallel/StatusCmd (43.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (27.76s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-808300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-808300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-v2m5g" [b0026a22-0afb-42fb-9ecb-3c5a5054b17a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-v2m5g" [b0026a22-0afb-42fb-9ecb-3c5a5054b17a] Running
E0421 18:55:36.890786   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0154799s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 service hello-node-connect --url: (18.2902394s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.27.199.19:32306
functional_test.go:1671: http://172.27.199.19:32306: success! body:
Hostname: hello-node-connect-57b4589c47-v2m5g

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.27.199.19:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.27.199.19:32306
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.76s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.82s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.82s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (49.6s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [24f4cf93-e486-46a0-89a5-a94fe4593b32] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0128002s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-808300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-808300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-808300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-808300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cc9ff67a-6911-4d97-9219-36ae3f9873ba] Pending
helpers_test.go:344: "sp-pod" [cc9ff67a-6911-4d97-9219-36ae3f9873ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cc9ff67a-6911-4d97-9219-36ae3f9873ba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.0195248s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-808300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-808300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-808300 delete -f testdata/storage-provisioner/pod.yaml: (2.0481819s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-808300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8bd0b60c-37fa-407d-8ff8-bcfe016d5644] Pending
helpers_test.go:344: "sp-pod" [8bd0b60c-37fa-407d-8ff8-bcfe016d5644] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8bd0b60c-37fa-407d-8ff8-bcfe016d5644] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.019102s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-808300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.60s)
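
For reference, the flow above applies testdata/storage-provisioner/pvc.yaml and pod.yaml, deletes the pod, re-applies it, and checks that /tmp/mount/foo survives the restart. Below is a minimal sketch of that kind of claim-plus-pod pair; it is not the actual testdata. The container image and storage class name are assumptions (minikube's default "standard" class and an nginx image, used only for illustration); only the names visible in the log (myclaim, sp-pod, myfrontend, test=storage-provisioner, /tmp/mount) are taken from the run.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  storageClassName: standard    # assumed: minikube's default provisioner class
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner   # label the test waits on in namespace "default"
spec:
  containers:
  - name: myfrontend
    image: nginx                # assumed image; any long-running container works
    volumeMounts:
    - mountPath: /tmp/mount     # path the test touches, then re-reads after pod re-create
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim        # binds the pod to the claim above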

                                                
                                    
TestFunctional/parallel/SSHCmd (20.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "echo hello": (10.2405941s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "cat /etc/hostname": (10.2002909s)
--- PASS: TestFunctional/parallel/SSHCmd (20.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (60.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.0796403s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt": (10.0257132s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cp functional-808300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3690067788\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cp functional-808300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3690067788\001\cp-test.txt: (11.0890795s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /home/docker/cp-test.txt": (11.4689968s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.8308776s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh -n functional-808300 "sudo cat /tmp/does/not/exist/cp-test.txt": (12.1435191s)
--- PASS: TestFunctional/parallel/CpCmd (60.65s)

                                                
                                    
TestFunctional/parallel/MySQL (62.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-808300 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-b7z77" [894132e9-3274-4b72-88c8-3369be4346f4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-b7z77" [894132e9-3274-4b72-88c8-3369be4346f4] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 48.0111333s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;": exit status 1 (339.2815ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;": exit status 1 (288.2444ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;": exit status 1 (311.0415ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;": exit status 1 (356.3072ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;": exit status 1 (339.0155ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-808300 exec mysql-64454c8b5c-b7z77 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (62.78s)
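
The early exec failures above are expected while mysqld is still initializing inside the pod: the container reports Running, but the client first hits ERROR 2002 (server socket not yet up) and then ERROR 1045 during the temporary init phase, and the test simply retries until "show databases;" succeeds. A rough sketch of the kind of deployment testdata\mysql.yaml drives is below; only the app=mysql label, the mysql:5.7 image, and the root password implied by "mysql -ppassword" come from the log, everything else is an illustrative assumption.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql                  # label the test waits on
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: docker.io/library/mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password         # matches the "mysql -ppassword" exec in the test
        ports:
        - containerPort: 3306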

                                                
                                    
TestFunctional/parallel/FileSync (10.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13800/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/test/nested/copy/13800/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/test/nested/copy/13800/hosts": (10.8206751s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.83s)

                                                
                                    
TestFunctional/parallel/CertSync (68.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13800.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/13800.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/13800.pem": (11.5954703s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13800.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/13800.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/13800.pem": (11.5737383s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.7840197s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/138002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/138002.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/138002.pem": (11.1303023s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/138002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/138002.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /usr/share/ca-certificates/138002.pem": (11.7330585s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.8600924s)
--- PASS: TestFunctional/parallel/CertSync (68.69s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-808300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.23s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (12.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 ssh "sudo systemctl is-active crio": exit status 1 (12.1417027s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:53:43.948778   10396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.14s)

                                                
                                    
TestFunctional/parallel/License (3.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.6352593s)
--- PASS: TestFunctional/parallel/License (3.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (17.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-808300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-808300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-gndm9" [18de1d6a-284d-48fc-b69e-2a819a92c8fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-gndm9" [18de1d6a-284d-48fc-b69e-2a819a92c8fa] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.0105734s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.46s)
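
The two kubectl commands above (create a hello-node deployment from registry.k8s.io/echoserver:1.8, then expose it as a NodePort on 8080) are roughly equivalent to applying a manifest shaped like the sketch below. This is only an illustration of what those imperative commands generate, not a file used by the test.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node             # label the test waits on
    spec:
      containers:
      - name: echoserver            # container name visible in the pod status above
        image: registry.k8s.io/echoserver:1.8
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: NodePort                    # what "kubectl expose --type=NodePort" produces
  selector:
    app: hello-node
  ports:
  - port: 8080
    targetPort: 8080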

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (14.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 service list: (14.2059247s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 256: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4792: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-808300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [aecdeab5-11fa-4ec0-b90c-69445cbfe572] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [aecdeab5-11fa-4ec0-b90c-69445cbfe572] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0253673s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.71s)
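
testdata\testsvc.yaml creates the nginx-svc pod the test waits for (label run=nginx-svc, container name nginx per the pod status above). Since the surrounding subtests exercise "minikube tunnel", the manifest presumably also defines a LoadBalancer Service for the tunnel to assign an external IP to; that service type and the image are assumptions in the sketch below, which is not the actual testdata file.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc                # label the test waits on
spec:
  containers:
  - name: nginx                   # container name shown in the pod status above
    image: nginx                  # assumed image
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer              # assumed: gives "minikube tunnel" something to expose
  selector:
    run: nginx-svc
  ports:
  - port: 80
    targetPort: 80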

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (13.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 service list -o json: (13.9226193s)
functional_test.go:1490: Took "13.9233276s" to run "out/minikube-windows-amd64.exe -p functional-808300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.92s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-808300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14168: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

                                                
                                    
TestFunctional/parallel/Version/components (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 version -o=json --components: (8.5476044s)
--- PASS: TestFunctional/parallel/Version/components (8.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr: (7.6897455s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-808300
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-808300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format short --alsologtostderr:
W0421 18:56:12.287391    4820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0421 18:56:12.374383    4820 out.go:291] Setting OutFile to fd 780 ...
I0421 18:56:12.375376    4820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:12.375376    4820 out.go:304] Setting ErrFile to fd 952...
I0421 18:56:12.375376    4820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:12.392378    4820 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:12.393368    4820 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:12.394403    4820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:14.675411    4820 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:14.675411    4820 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:14.689407    4820 ssh_runner.go:195] Run: systemctl --version
I0421 18:56:14.689407    4820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:16.864477    4820 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:16.864477    4820 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:16.864585    4820 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0421 18:56:19.536132    4820 main.go:141] libmachine: [stdout =====>] : 172.27.199.19

                                                
                                                
I0421 18:56:19.536132    4820 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:19.536372    4820 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0421 18:56:19.650999    4820 ssh_runner.go:235] Completed: systemctl --version: (4.9614546s)
I0421 18:56:19.661655    4820 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (7.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr: (7.6938623s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-808300 | 8e253d350b360 | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/google-containers/addon-resizer      | functional-808300 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| docker.io/library/nginx                     | latest            | 2ac752d7aeb1d | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | 11d76b979f02d | 48.3MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format table --alsologtostderr:
W0421 18:56:28.490390    5940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0421 18:56:28.583502    5940 out.go:291] Setting OutFile to fd 772 ...
I0421 18:56:28.600736    5940 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:28.600736    5940 out.go:304] Setting ErrFile to fd 876...
I0421 18:56:28.600949    5940 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:28.616262    5940 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:28.617352    5940 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:28.618198    5940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:30.869674    5940 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:30.869674    5940 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:30.884915    5940 ssh_runner.go:195] Run: systemctl --version
I0421 18:56:30.884915    5940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:33.147001    5940 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:33.147070    5940 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:33.147155    5940 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0421 18:56:35.822577    5940 main.go:141] libmachine: [stdout =====>] : 172.27.199.19

                                                
                                                
I0421 18:56:35.822577    5940 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:35.823547    5940 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0421 18:56:35.951079    5940 ssh_runner.go:235] Completed: systemctl --version: (5.0661279s)
I0421 18:56:35.964528    5940 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr: (7.7276428s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr:
[{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132
622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-808300"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8e253d350b360435dff6ee99ad4290471cf018a78d510bf12f293c3ce2e7ecc9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-te
st:functional-808300"],"size":"30"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format json --alsologtostderr:
W0421 18:56:20.766801    9408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0421 18:56:20.857530    9408 out.go:291] Setting OutFile to fd 744 ...
I0421 18:56:20.857530    9408 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:20.857530    9408 out.go:304] Setting ErrFile to fd 736...
I0421 18:56:20.857530    9408 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:20.875561    9408 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:20.875561    9408 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:20.953825    9408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:23.180118    9408 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:23.180118    9408 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:23.193121    9408 ssh_runner.go:195] Run: systemctl --version
I0421 18:56:23.194117    9408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:25.465864    9408 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:25.467163    9408 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:25.467163    9408 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0421 18:56:28.195297    9408 main.go:141] libmachine: [stdout =====>] : 172.27.199.19

                                                
                                                
I0421 18:56:28.195297    9408 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:28.195976    9408 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0421 18:56:28.304518    9408 ssh_runner.go:235] Completed: systemctl --version: (5.1113602s)
I0421 18:56:28.316254    9408 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr: (7.5588182s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr:
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-808300
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 8e253d350b360435dff6ee99ad4290471cf018a78d510bf12f293c3ce2e7ecc9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-808300
size: "30"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image ls --format yaml --alsologtostderr:
W0421 18:56:13.201870    9416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0421 18:56:13.300864    9416 out.go:291] Setting OutFile to fd 904 ...
I0421 18:56:13.318002    9416 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:13.318002    9416 out.go:304] Setting ErrFile to fd 736...
I0421 18:56:13.318002    9416 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:13.338531    9416 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:13.339520    9416 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:13.339520    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:15.546206    9416 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:15.546206    9416 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:15.560206    9416 ssh_runner.go:195] Run: systemctl --version
I0421 18:56:15.560206    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:17.749775    9416 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:17.749775    9416 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:17.749775    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0421 18:56:20.428729    9416 main.go:141] libmachine: [stdout =====>] : 172.27.199.19

                                                
                                                
I0421 18:56:20.428829    9416 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:20.429088    9416 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0421 18:56:20.528682    9416 ssh_runner.go:235] Completed: systemctl --version: (4.9684407s)
I0421 18:56:20.543551    9416 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (27.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-808300 ssh pgrep buildkitd: exit status 1 (9.9172015s)

                                                
                                                
** stderr ** 
	W0421 18:56:19.990438    4680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image build -t localhost/my-image:functional-808300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image build -t localhost/my-image:functional-808300 testdata\build --alsologtostderr: (10.0849369s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-808300 image build -t localhost/my-image:functional-808300 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in be2d6002ef22
---> Removed intermediate container be2d6002ef22
---> 9a9cc350b137
Step 3/3 : ADD content.txt /
---> 48dbda8c932b
Successfully built 48dbda8c932b
Successfully tagged localhost/my-image:functional-808300
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-808300 image build -t localhost/my-image:functional-808300 testdata\build --alsologtostderr:
W0421 18:56:29.901357    7084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0421 18:56:29.991896    7084 out.go:291] Setting OutFile to fd 780 ...
I0421 18:56:30.017451    7084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:30.017451    7084 out.go:304] Setting ErrFile to fd 716...
I0421 18:56:30.017451    7084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0421 18:56:30.032434    7084 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:30.049307    7084 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0421 18:56:30.050046    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:32.254587    7084 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:32.254587    7084 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:32.268592    7084 ssh_runner.go:195] Run: systemctl --version
I0421 18:56:32.268592    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-808300 ).state
I0421 18:56:34.502127    7084 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0421 18:56:34.502127    7084 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:34.502127    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-808300 ).networkadapters[0]).ipaddresses[0]
I0421 18:56:37.186084    7084 main.go:141] libmachine: [stdout =====>] : 172.27.199.19

                                                
                                                
I0421 18:56:37.186084    7084 main.go:141] libmachine: [stderr =====>] : 
I0421 18:56:37.188001    7084 sshutil.go:53] new ssh client: &{IP:172.27.199.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-808300\id_rsa Username:docker}
I0421 18:56:37.310441    7084 ssh_runner.go:235] Completed: systemctl --version: (5.0416621s)
I0421 18:56:37.310441    7084 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3413110077.tar
I0421 18:56:37.326700    7084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0421 18:56:37.366192    7084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3413110077.tar
I0421 18:56:37.377920    7084 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3413110077.tar: stat -c "%s %y" /var/lib/minikube/build/build.3413110077.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3413110077.tar': No such file or directory
I0421 18:56:37.378517    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3413110077.tar --> /var/lib/minikube/build/build.3413110077.tar (3072 bytes)
I0421 18:56:37.449220    7084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3413110077
I0421 18:56:37.485466    7084 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3413110077 -xf /var/lib/minikube/build/build.3413110077.tar
I0421 18:56:37.505630    7084 docker.go:360] Building image: /var/lib/minikube/build/build.3413110077
I0421 18:56:37.516012    7084 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-808300 /var/lib/minikube/build/build.3413110077
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0421 18:56:39.744888    7084 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-808300 /var/lib/minikube/build/build.3413110077: (2.2288593s)
I0421 18:56:39.760918    7084 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3413110077
I0421 18:56:39.801967    7084 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3413110077.tar
I0421 18:56:39.823238    7084 build_images.go:217] Built localhost/my-image:functional-808300 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3413110077.tar
I0421 18:56:39.823238    7084 build_images.go:133] succeeded building to: functional-808300
I0421 18:56:39.823238    7084 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (7.4591994s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (27.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.5717305s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-808300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (27.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (18.8552204s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (9.1261137s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (27.98s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (49.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-808300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-808300": (32.7266831s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-808300 docker-env | Invoke-Expression ; docker images": (17.156386s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (49.90s)
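Note: the two commands above exercise the documented docker-env pattern for PowerShell. The equivalent interactive sequence (sketch only; assumes minikube.exe and docker.exe are on PATH):

    # Point the host docker CLI at the Docker daemon inside the functional-808300 node
    minikube -p functional-808300 docker-env | Invoke-Expression
    docker images    # now lists the images on the minikube node rather than the host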

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (22.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (13.9076629s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (8.6107853s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (22.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.4369828s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-808300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image load --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (16.5656762s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (8.356419s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.65s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (3.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2: (3.147266s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2: (2.611635s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.61s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 update-context --alsologtostderr -v=2: (2.5847301s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (11.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.1722381s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image save gcr.io/google-containers/addon-resizer:functional-808300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image save gcr.io/google-containers/addon-resizer:functional-808300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.6591566s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.66s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (12.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.5500235s)
functional_test.go:1311: Took "11.5500235s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "463.3122ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (17.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image rm gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image rm gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (9.4672989s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (8.0304572s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (11.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.6141893s)
functional_test.go:1362: Took "11.6141893s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "289.4853ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.90s)
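Note: the timing gap above (about 11.6s versus 0.3s) is the point of the --light flag: the plain listing queries each cluster's live status, while --light is expected to skip that probe and read only the stored profile configuration (a hedged reading, consistent with the timings in this run). Sketch:

    minikube profile list -o json            # includes per-profile status; roughly 11s against Hyper-V here
    minikube profile list -o json --light    # skips the status probe; returns in milliseconds in this run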

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.4141598s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image ls: (7.6155881s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.03s)
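Note: together with ImageSaveToFile and ImageRemove above, this completes a save/remove/load round trip. A condensed sketch of the same sequence (illustrative; C:\tmp is a placeholder path):

    minikube -p functional-808300 image save gcr.io/google-containers/addon-resizer:functional-808300 C:\tmp\addon-resizer-save.tar
    minikube -p functional-808300 image rm gcr.io/google-containers/addon-resizer:functional-808300
    minikube -p functional-808300 image load C:\tmp\addon-resizer-save.tar
    minikube -p functional-808300 image ls    # the addon-resizer tag should be listed again after the load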

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-808300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-808300 image save --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-808300 image save --daemon gcr.io/google-containers/addon-resizer:functional-808300 --alsologtostderr: (10.7791792s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-808300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.20s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.49s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-808300
--- PASS: TestFunctional/delete_addon-resizer_images (0.49s)

                                                
                                    
TestFunctional/delete_my-image_image (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-808300
--- PASS: TestFunctional/delete_my-image_image (0.20s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-808300
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (734.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-736000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0421 19:02:54.227473   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:54.242791   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:54.258169   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:54.289435   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:54.336397   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:54.431043   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:54.604141   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:54.938124   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:55.588079   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:56.880679   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:02:59.452988   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:03:04.584238   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:03:14.826985   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:03:35.314723   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:04:16.280271   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:05:36.886494   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 19:05:38.209091   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:07:54.223882   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:08:22.053429   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:08:40.050241   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 19:10:36.890559   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 19:12:54.221616   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-736000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m37.1849085s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 status -v=7 --alsologtostderr: (37.2426442s)
--- PASS: TestMultiControlPlane/serial/StartCluster (734.43s)
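Note: the invocation above is the entire setup for the HA suite that follows; --ha provisions additional control-plane nodes (ha-736000-m02 and -m03 show up in the CopyFile test further down). Sketch of the same start/verify pair (illustrative; flags copied from the test invocation, assumes minikube.exe on PATH):

    minikube start -p ha-736000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
    minikube -p ha-736000 status -v=7 --alsologtostderr    # should report every node in the profile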

                                                
                                    
TestMultiControlPlane/serial/DeployApp (13.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-736000 -- rollout status deployment/busybox: (3.8504023s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- nslookup kubernetes.io: (2.0077482s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-nttt5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-pnbbn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-pnbbn -- nslookup kubernetes.io: (1.7487713s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-nttt5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-pnbbn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-nttt5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-736000 -- exec busybox-fc5497c4f-pnbbn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.43s)
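Note: the DNS checks above run nslookup from each busybox replica against names of increasing specificity. The per-pod pattern, condensed (sketch; the pod name is one example from this run):

    minikube kubectl -p ha-736000 -- rollout status deployment/busybox
    minikube kubectl -p ha-736000 -- exec busybox-fc5497c4f-cmvt9 -- nslookup kubernetes.default.svc.cluster.local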

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (258.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-736000 -v=7 --alsologtostderr
E0421 19:15:36.892540   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 19:17:54.237929   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-736000 -v=7 --alsologtostderr: (3m28.9934879s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 status -v=7 --alsologtostderr
E0421 19:19:17.420166   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 status -v=7 --alsologtostderr: (49.8099829s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (258.80s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-736000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (29.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.1487979s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.15s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (643.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 status --output json -v=7 --alsologtostderr
E0421 19:20:36.899319   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 status --output json -v=7 --alsologtostderr: (49.3960424s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000:/home/docker/cp-test.txt: (9.7875683s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt": (9.7361916s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000.txt: (9.7605495s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt": (9.8120652s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000_ha-736000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000_ha-736000-m02.txt: (17.0336573s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt": (9.7107107s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m02.txt": (9.8182568s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000_ha-736000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000_ha-736000-m03.txt: (17.2303614s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt": (9.841049s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m03.txt": (9.6747793s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000_ha-736000-m04.txt
E0421 19:22:54.234924   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000_ha-736000-m04.txt: (17.1003932s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test.txt": (9.7503958s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000_ha-736000-m04.txt": (9.718956s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000-m02:/home/docker/cp-test.txt: (9.8435404s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt": (9.7306853s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m02.txt: (9.7656329s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt": (9.7170913s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m02_ha-736000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m02_ha-736000.txt: (17.0225272s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt": (9.7054934s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000.txt": (9.7464706s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000-m02_ha-736000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000-m02_ha-736000-m03.txt: (17.0364772s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt": (9.7463172s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000-m03.txt": (9.6895218s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000-m02_ha-736000-m04.txt
E0421 19:25:20.064914   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m02:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000-m02_ha-736000-m04.txt: (16.8891026s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt": (9.8171357s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000-m04.txt"
E0421 19:25:36.898771   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000-m02_ha-736000-m04.txt": (9.6436502s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000-m03:/home/docker/cp-test.txt: (9.7927865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt": (9.7317728s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m03.txt: (9.7281042s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt": (9.6718662s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m03_ha-736000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m03_ha-736000.txt: (17.0718456s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt": (9.7666099s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000.txt": (9.8840261s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt: (17.0087763s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt": (9.7959395s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000-m02.txt": (9.7968806s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m03:/home/docker/cp-test.txt ha-736000-m04:/home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt: (17.0776652s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt"
E0421 19:27:54.242423   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test.txt": (9.7797836s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test_ha-736000-m03_ha-736000-m04.txt": (9.7340815s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp testdata\cp-test.txt ha-736000-m04:/home/docker/cp-test.txt: (9.7359073s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt": (9.7303076s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3374611387\001\cp-test_ha-736000-m04.txt: (9.7811009s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt": (9.7429217s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m04_ha-736000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000:/home/docker/cp-test_ha-736000-m04_ha-736000.txt: (16.962308s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt": (9.7549414s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000.txt": (9.7004991s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000-m02:/home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt: (17.0625458s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt": (9.8577366s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000-m02.txt": (9.7583082s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 cp ha-736000-m04:/home/docker/cp-test.txt ha-736000-m03:/home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt: (16.8067915s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m04 "sudo cat /home/docker/cp-test.txt": (9.6694458s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt"
E0421 19:30:36.908250   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-736000 ssh -n ha-736000-m03 "sudo cat /home/docker/cp-test_ha-736000-m04_ha-736000-m03.txt": (9.6790941s)
--- PASS: TestMultiControlPlane/serial/CopyFile (643.84s)
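Note: every step above is the same copy-then-verify pair applied to each source/target node combination, which is why the test runs for over ten minutes on Hyper-V (each cp or ssh hop costs roughly 10-17s in this run). The pair in isolation (sketch; node name and paths taken from this run):

    minikube -p ha-736000 cp testdata\cp-test.txt ha-736000-m02:/home/docker/cp-test.txt
    minikube -p ha-736000 ssh -n ha-736000-m02 "sudo cat /home/docker/cp-test.txt"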

                                                
                                    
TestImageBuild/serial/Setup (203.4s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-327600 --driver=hyperv
E0421 19:35:36.899148   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 19:35:57.429316   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:37:54.241827   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-327600 --driver=hyperv: (3m23.3951466s)
--- PASS: TestImageBuild/serial/Setup (203.40s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.94s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-327600
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-327600: (9.9373907s)
--- PASS: TestImageBuild/serial/NormalBuild (9.94s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9.29s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-327600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-327600: (9.2911226s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.29s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.93s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-327600
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-327600: (7.9338938s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.93s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.73s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-327600
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-327600: (7.7312512s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.73s)

                                                
                                    
TestJSONOutput/start/Command (216.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-307400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0421 19:40:36.915292   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 19:42:00.073782   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 19:42:54.234164   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-307400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m36.4129001s)
--- PASS: TestJSONOutput/start/Command (216.41s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (8.17s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-307400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-307400 --output=json --user=testUser: (8.1676642s)
--- PASS: TestJSONOutput/pause/Command (8.17s)
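Note: with --output=json minikube writes one CloudEvents-style JSON object per line (see the sample events under TestErrorJSONOutput later in this report), so the stream is easy to post-process. A PowerShell sketch (field names taken from those sample events; not part of the test run):

    minikube pause -p json-output-307400 --output=json --user=testUser |
      ForEach-Object { ($_ | ConvertFrom-Json).data.message }    # prints the message field of each event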

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (8.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-307400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-307400 --output=json --user=testUser: (8.0435806s)
--- PASS: TestJSONOutput/unpause/Command (8.04s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (40.71s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-307400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-307400 --output=json --user=testUser: (40.7127976s)
--- PASS: TestJSONOutput/stop/Command (40.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.58s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-032700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-032700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (314.7281ms)

-- stdout --
	{"specversion":"1.0","id":"864d9b36-a4b1-48d5-bbdb-3f7a7cac50f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-032700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1067e88a-6779-47a3-a3f9-f3dc2a8db0cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"e8840caa-348a-45e9-9d4a-d1ff3ca9dce5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68f82b68-e144-43a1-9fb9-efc72ce2b8ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"5f33b164-a800-4264-bd12-a2c058310cb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18702"}}
	{"specversion":"1.0","id":"0a94e438-2822-449c-bdae-6cc1f9971544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e14f4a09-dd76-4ec1-84be-ec1137faa584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0421 19:44:11.599171    7580 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-032700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-032700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-032700: (1.2640796s)
--- PASS: TestErrorJSONOutput (1.58s)
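The stdout above is minikube's CloudEvents-style JSON output: one JSON object per line with a fixed envelope (specversion, id, source, type, datacontenttype) and a type-specific data map. A minimal decoding sketch in Go, not part of the test suite; the struct below models only the keys visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope keys seen in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line (e.g. interleaved stderr)
		}
		// io.k8s.sigs.minikube.error events carry exitcode/message in data.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

A hypothetical invocation would pipe the --output=json run shown above into this program, e.g. out/minikube-windows-amd64.exe start ... --output=json | go run decode_events.go.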

TestMainNoArgs (0.28s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.28s)

TestMinikubeProfile (539.15s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-032900 --driver=hyperv
E0421 19:45:36.907098   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-032900 --driver=hyperv: (3m24.5836128s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-032900 --driver=hyperv
E0421 19:47:54.242578   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:50:36.913026   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-032900 --driver=hyperv: (3m25.8792245s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-032900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.5248931s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-032900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.4335357s)
helpers_test.go:175: Cleaning up "second-032900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-032900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-032900: (42.2307173s)
helpers_test.go:175: Cleaning up "first-032900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-032900
E0421 19:52:37.440143   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:52:54.247263   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-032900: (46.5251708s)
--- PASS: TestMinikubeProfile (539.15s)

TestMountStart/serial/StartWithMountFirst (160.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-945100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0421 19:55:36.918919   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-945100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m39.0114877s)
--- PASS: TestMountStart/serial/StartWithMountFirst (160.01s)

TestMountStart/serial/VerifyMountFirst (9.69s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-945100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-945100 ssh -- ls /minikube-host: (9.6907916s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.69s)

TestMountStart/serial/StartWithMountSecond (160.42s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-945100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0421 19:57:54.243399   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 19:58:40.092329   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-945100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m39.4097787s)
--- PASS: TestMountStart/serial/StartWithMountSecond (160.42s)

TestMountStart/serial/VerifyMountSecond (9.59s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-945100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-945100 ssh -- ls /minikube-host: (9.5848104s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.59s)

TestMountStart/serial/DeleteFirst (28.06s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-945100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-945100 --alsologtostderr -v=5: (28.0550194s)
--- PASS: TestMountStart/serial/DeleteFirst (28.06s)

TestMountStart/serial/VerifyMountPostDelete (9.69s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-945100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-945100 ssh -- ls /minikube-host: (9.6896335s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.69s)

TestMountStart/serial/Stop (30.89s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-945100
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-945100: (30.8878117s)
--- PASS: TestMountStart/serial/Stop (30.89s)

TestMountStart/serial/RestartStopped (121.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-945100
E0421 20:00:36.913179   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-945100: (2m0.0520844s)
--- PASS: TestMountStart/serial/RestartStopped (121.07s)

TestMountStart/serial/VerifyMountPostStop (9.49s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-945100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-945100 ssh -- ls /minikube-host: (9.4933471s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.49s)

TestMultiNode/serial/FreshStart2Nodes (435.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-152500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0421 20:02:54.255445   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 20:05:36.920209   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 20:07:54.247729   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 20:09:17.463552   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-152500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m50.7698655s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 status --alsologtostderr: (24.3671213s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (435.14s)

TestMultiNode/serial/DeployApp2Nodes (9.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- rollout status deployment/busybox: (3.0133415s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-82tdr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-82tdr -- nslookup kubernetes.io: (1.8826399s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-l6544 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-82tdr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-l6544 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-82tdr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-152500 -- exec busybox-fc5497c4f-l6544 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.34s)

TestMultiNode/serial/AddNode (236.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-152500 -v 3 --alsologtostderr
E0421 20:12:54.248520   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-152500 -v 3 --alsologtostderr: (3m19.9542088s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 status --alsologtostderr: (36.9560832s)
--- PASS: TestMultiNode/serial/AddNode (236.91s)

TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-152500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

TestMultiNode/serial/ProfileList (9.91s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.9140667s)
--- PASS: TestMultiNode/serial/ProfileList (9.91s)

TestMultiNode/serial/CopyFile (371.76s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 status --output json --alsologtostderr
E0421 20:15:20.111959   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 20:15:36.927080   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 status --output json --alsologtostderr: (36.7341717s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp testdata\cp-test.txt multinode-152500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp testdata\cp-test.txt multinode-152500:/home/docker/cp-test.txt: (9.7645577s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt": (9.692176s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500.txt: (9.6670967s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt": (9.615664s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500:/home/docker/cp-test.txt multinode-152500-m02:/home/docker/cp-test_multinode-152500_multinode-152500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500:/home/docker/cp-test.txt multinode-152500-m02:/home/docker/cp-test_multinode-152500_multinode-152500-m02.txt: (16.9765269s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt": (9.7291003s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test_multinode-152500_multinode-152500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test_multinode-152500_multinode-152500-m02.txt": (9.7502501s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500:/home/docker/cp-test.txt multinode-152500-m03:/home/docker/cp-test_multinode-152500_multinode-152500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500:/home/docker/cp-test.txt multinode-152500-m03:/home/docker/cp-test_multinode-152500_multinode-152500-m03.txt: (17.3334101s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test.txt": (9.7239246s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test_multinode-152500_multinode-152500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test_multinode-152500_multinode-152500-m03.txt": (9.6326626s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp testdata\cp-test.txt multinode-152500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp testdata\cp-test.txt multinode-152500-m02:/home/docker/cp-test.txt: (9.9163943s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt"
E0421 20:17:54.257451   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt": (9.7150161s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500-m02.txt: (9.6846275s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt": (9.6544493s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt multinode-152500:/home/docker/cp-test_multinode-152500-m02_multinode-152500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt multinode-152500:/home/docker/cp-test_multinode-152500-m02_multinode-152500.txt: (16.9083602s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt": (9.6641906s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test_multinode-152500-m02_multinode-152500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test_multinode-152500-m02_multinode-152500.txt": (9.7517827s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt multinode-152500-m03:/home/docker/cp-test_multinode-152500-m02_multinode-152500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m02:/home/docker/cp-test.txt multinode-152500-m03:/home/docker/cp-test_multinode-152500-m02_multinode-152500-m03.txt: (16.9880457s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test.txt": (9.610487s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test_multinode-152500-m02_multinode-152500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test_multinode-152500-m02_multinode-152500-m03.txt": (9.7380729s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp testdata\cp-test.txt multinode-152500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp testdata\cp-test.txt multinode-152500-m03:/home/docker/cp-test.txt: (9.7552322s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt": (9.6951929s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile3719690202\001\cp-test_multinode-152500-m03.txt: (9.7760072s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt": (9.7030037s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt multinode-152500:/home/docker/cp-test_multinode-152500-m03_multinode-152500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt multinode-152500:/home/docker/cp-test_multinode-152500-m03_multinode-152500.txt: (16.8182612s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt": (9.6854303s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test_multinode-152500-m03_multinode-152500.txt"
E0421 20:20:36.931081   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500 "sudo cat /home/docker/cp-test_multinode-152500-m03_multinode-152500.txt": (9.6468388s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt multinode-152500-m02:/home/docker/cp-test_multinode-152500-m03_multinode-152500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 cp multinode-152500-m03:/home/docker/cp-test.txt multinode-152500-m02:/home/docker/cp-test_multinode-152500-m03_multinode-152500-m02.txt: (17.0402689s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m03 "sudo cat /home/docker/cp-test.txt": (9.6739045s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test_multinode-152500-m03_multinode-152500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 ssh -n multinode-152500-m02 "sudo cat /home/docker/cp-test_multinode-152500-m03_multinode-152500-m02.txt": (9.6919644s)
--- PASS: TestMultiNode/serial/CopyFile (371.76s)

TestMultiNode/serial/StopNode (78.89s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 node stop m03: (25.4751567s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-152500 status: exit status 7 (26.7369651s)

-- stdout --
	multinode-152500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-152500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-152500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0421 20:21:47.653088    4676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-152500 status --alsologtostderr: exit status 7 (26.6795143s)

-- stdout --
	multinode-152500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-152500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-152500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0421 20:22:14.397790    3912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0421 20:22:14.486798    3912 out.go:291] Setting OutFile to fd 676 ...
	I0421 20:22:14.486798    3912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:22:14.486798    3912 out.go:304] Setting ErrFile to fd 892...
	I0421 20:22:14.486798    3912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 20:22:14.501785    3912 out.go:298] Setting JSON to false
	I0421 20:22:14.502787    3912 mustload.go:65] Loading cluster: multinode-152500
	I0421 20:22:14.502787    3912 notify.go:220] Checking for updates...
	I0421 20:22:14.502787    3912 config.go:182] Loaded profile config "multinode-152500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 20:22:14.502787    3912 status.go:255] checking status of multinode-152500 ...
	I0421 20:22:14.504776    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:22:16.710278    3912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:22:16.710348    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:16.710412    3912 status.go:330] multinode-152500 host status = "Running" (err=<nil>)
	I0421 20:22:16.710412    3912 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:22:16.710665    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:22:18.923293    3912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:22:18.923293    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:18.923947    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:22:21.533457    3912 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:22:21.533541    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:21.533541    3912 host.go:66] Checking if "multinode-152500" exists ...
	I0421 20:22:21.547473    3912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 20:22:21.547473    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500 ).state
	I0421 20:22:23.712517    3912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:22:23.712517    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:23.712746    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500 ).networkadapters[0]).ipaddresses[0]
	I0421 20:22:26.380489    3912 main.go:141] libmachine: [stdout =====>] : 172.27.198.190
	
	I0421 20:22:26.380489    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:26.381114    3912 sshutil.go:53] new ssh client: &{IP:172.27.198.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500\id_rsa Username:docker}
	I0421 20:22:26.485218    3912 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9377095s)
	I0421 20:22:26.500547    3912 ssh_runner.go:195] Run: systemctl --version
	I0421 20:22:26.525588    3912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:22:26.557722    3912 kubeconfig.go:125] found "multinode-152500" server: "https://172.27.198.190:8443"
	I0421 20:22:26.557754    3912 api_server.go:166] Checking apiserver status ...
	I0421 20:22:26.571264    3912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0421 20:22:26.616955    3912 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2040/cgroup
	W0421 20:22:26.635810    3912 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2040/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0421 20:22:26.650815    3912 ssh_runner.go:195] Run: ls
	I0421 20:22:26.658288    3912 api_server.go:253] Checking apiserver healthz at https://172.27.198.190:8443/healthz ...
	I0421 20:22:26.670003    3912 api_server.go:279] https://172.27.198.190:8443/healthz returned 200:
	ok
	I0421 20:22:26.670003    3912 status.go:422] multinode-152500 apiserver status = Running (err=<nil>)
	I0421 20:22:26.670003    3912 status.go:257] multinode-152500 status: &{Name:multinode-152500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0421 20:22:26.670530    3912 status.go:255] checking status of multinode-152500-m02 ...
	I0421 20:22:26.671343    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:22:28.912981    3912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:22:28.912981    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:28.913149    3912 status.go:330] multinode-152500-m02 host status = "Running" (err=<nil>)
	I0421 20:22:28.913149    3912 host.go:66] Checking if "multinode-152500-m02" exists ...
	I0421 20:22:28.913980    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:22:31.169760    3912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:22:31.169760    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:31.169760    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:22:33.832770    3912 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:22:33.832808    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:33.832808    3912 host.go:66] Checking if "multinode-152500-m02" exists ...
	I0421 20:22:33.846572    3912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0421 20:22:33.847604    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m02 ).state
	I0421 20:22:36.014326    3912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0421 20:22:36.015260    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:36.015260    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-152500-m02 ).networkadapters[0]).ipaddresses[0]
	I0421 20:22:38.613283    3912 main.go:141] libmachine: [stdout =====>] : 172.27.195.108
	
	I0421 20:22:38.613957    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:38.614031    3912 sshutil.go:53] new ssh client: &{IP:172.27.195.108 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-152500-m02\id_rsa Username:docker}
	I0421 20:22:38.731148    3912 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8835085s)
	I0421 20:22:38.745659    3912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0421 20:22:38.773314    3912 status.go:257] multinode-152500-m02 status: &{Name:multinode-152500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0421 20:22:38.773374    3912 status.go:255] checking status of multinode-152500-m03 ...
	I0421 20:22:38.774068    3912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-152500-m03 ).state
	I0421 20:22:40.908563    3912 main.go:141] libmachine: [stdout =====>] : Off
	
	I0421 20:22:40.908563    3912 main.go:141] libmachine: [stderr =====>] : 
	I0421 20:22:40.908884    3912 status.go:330] multinode-152500-m03 host status = "Stopped" (err=<nil>)
	I0421 20:22:40.908884    3912 status.go:343] host is not running, skipping remaining checks
	I0421 20:22:40.908884    3912 status.go:257] multinode-152500-m03 status: &{Name:multinode-152500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (78.89s)
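Note that with multinode-152500-m03 stopped, the `status` runs above still print per-node output on stdout but exit with status 7, so the test tolerates a non-zero exit. A minimal sketch of calling the same command from Go and surfacing that exit code rather than treating it as a hard failure; the binary path and profile name are simply the ones used in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name copied from this log; adjust elsewhere.
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "multinode-152500", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // per-node status text, as shown above

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The run above exited 7 while multinode-152500-m03 was stopped.
		fmt.Printf("status exited with code %d; some nodes may be down\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube status:", err)
	}
}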

TestMultiNode/serial/StartAfterStop (188.54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 node start m03 -v=7 --alsologtostderr
E0421 20:22:54.265768   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 node start m03 -v=7 --alsologtostderr: (2m32.0029803s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-152500 status -v=7 --alsologtostderr
E0421 20:25:36.923549   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-152500 status -v=7 --alsologtostderr: (36.3443736s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (188.54s)

TestPreload (543.16s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-504400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0421 20:37:54.265083   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-504400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m38.4695682s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-504400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-504400 image pull gcr.io/k8s-minikube/busybox: (8.8337273s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-504400
E0421 20:40:36.927506   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-504400: (40.548793s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-504400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0421 20:42:37.497549   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 20:42:54.274016   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-504400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m44.2728126s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-504400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-504400 image list: (7.4674789s)
helpers_test.go:175: Cleaning up "test-preload-504400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-504400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-504400: (43.5593539s)
--- PASS: TestPreload (543.16s)

TestScheduledStopWindows (337.62s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-043800 --memory=2048 --driver=hyperv
E0421 20:45:36.942897   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 20:47:54.267682   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-043800 --memory=2048 --driver=hyperv: (3m23.364594s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-043800 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-043800 --schedule 5m: (11.0264127s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-043800 -n scheduled-stop-043800
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-043800 -n scheduled-stop-043800: exit status 1 (10.0211843s)

** stderr ** 
	W0421 20:48:25.859151    9388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-043800 -- sudo systemctl show minikube-scheduled-stop --no-page
E0421 20:48:40.154363   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-043800 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.7867741s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-043800 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-043800 --schedule 5s: (11.0159736s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-043800
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-043800: exit status 7 (2.4980368s)

-- stdout --
	scheduled-stop-043800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0421 20:49:56.701742    7048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-043800 -n scheduled-stop-043800
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-043800 -n scheduled-stop-043800: exit status 7 (2.4818603s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0421 20:49:59.199902   10516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-043800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-043800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-043800: (27.4092635s)
--- PASS: TestScheduledStopWindows (337.62s)
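The scheduled-stop flow above is purely CLI-driven: `stop --schedule` arms a delayed stop, and the test then polls `status` until the host reports Stopped (the status command exits 7 at that point, as shown). A rough sketch of the same arm-and-poll loop, under the assumption that the binary path and profile name from this log are reused:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

const (
	bin     = "out/minikube-windows-amd64.exe" // binary path used in this log
	profile = "scheduled-stop-043800"          // profile name used in this log
)

func main() {
	// Arm a stop a few seconds out, mirroring `stop --schedule 5s` above.
	if err := exec.Command(bin, "stop", "-p", profile, "--schedule", "5s").Run(); err != nil {
		fmt.Println("failed to schedule stop:", err)
		return
	}
	// Poll the host state until it reports Stopped, as the test does.
	for i := 0; i < 30; i++ {
		out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}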

TestRunningBinaryUpgrade (1123.23s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3090126769.exe start -p running-upgrade-043400 --memory=2200 --vm-driver=hyperv
E0421 20:50:36.938406   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
E0421 20:52:54.278748   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3090126769.exe start -p running-upgrade-043400 --memory=2200 --vm-driver=hyperv: (8m21.899591s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-043400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0421 20:59:17.507488   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-808300\client.crt: The system cannot find the path specified.
E0421 21:00:36.940975   13800 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-519700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-043400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (9m6.4981283s)
helpers_test.go:175: Cleaning up "running-upgrade-043400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-043400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-043400: (1m13.7304925s)
--- PASS: TestRunningBinaryUpgrade (1123.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-043400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-043400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (380.7516ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-043400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 20:50:29.113055    5312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                    

Test skip (30/197)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-808300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-808300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 7340: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
TestFunctional/parallel/DryRun (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0379171s)

                                                
                                                
-- stdout --
	* [functional-808300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:55:17.354921   14172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0421 18:55:17.464951   14172 out.go:291] Setting OutFile to fd 364 ...
	I0421 18:55:17.465888   14172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:55:17.465888   14172 out.go:304] Setting ErrFile to fd 692...
	I0421 18:55:17.465888   14172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:55:17.490886   14172 out.go:298] Setting JSON to false
	I0421 18:55:17.495698   14172 start.go:129] hostinfo: {"hostname":"minikube6","uptime":11592,"bootTime":1713714124,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 18:55:17.495873   14172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 18:55:17.499585   14172 out.go:177] * [functional-808300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 18:55:17.506349   14172 notify.go:220] Checking for updates...
	I0421 18:55:17.508911   14172 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:55:17.511937   14172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:55:17.518165   14172 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 18:55:17.521082   14172 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:55:17.523727   14172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:55:17.528334   14172 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:55:17.529953   14172 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-808300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0452467s)

                                                
                                                
-- stdout --
	* [functional-808300] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18702
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0421 18:55:22.408618    2612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0421 18:55:22.502591    2612 out.go:291] Setting OutFile to fd 508 ...
	I0421 18:55:22.503585    2612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:55:22.503585    2612 out.go:304] Setting ErrFile to fd 824...
	I0421 18:55:22.503585    2612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0421 18:55:22.540596    2612 out.go:298] Setting JSON to false
	I0421 18:55:22.546611    2612 start.go:129] hostinfo: {"hostname":"minikube6","uptime":11597,"bootTime":1713714124,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0421 18:55:22.546611    2612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0421 18:55:22.553597    2612 out.go:177] * [functional-808300] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0421 18:55:22.556614    2612 notify.go:220] Checking for updates...
	I0421 18:55:22.560594    2612 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0421 18:55:22.564109    2612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0421 18:55:22.566608    2612 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0421 18:55:22.568673    2612 out.go:177]   - MINIKUBE_LOCATION=18702
	I0421 18:55:22.572590    2612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0421 18:55:22.576586    2612 config.go:182] Loaded profile config "functional-808300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0421 18:55:22.577586    2612 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.05s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
